
Is there a way to find a quadratic function from multiple coordinates, like the least squares method?
I'm comfortable with integration, but not with differentiation. Please tell me if there is a way to find the quadratic function y = ax^2 + bx + c from multiple coordinates without using differentiation!

Sorry for the elementary question, but what is the principle behind the least squares method for finding an approximate expression?
For example, is it something like taking an average, adding up all the coordinates and dividing by the number of coordinates?
I may not be explaining this well, but I want to understand the basics.
I found the Wikipedia article, but the explanation was too difficult for me to follow.

Edit 8/28
I'm working through the method step by step (and also thinking ahead to trigonometric functions), so I'd be glad if you could bear with me.
Just out of curiosity, I suspect the slope of y = (x^2 + 3x + 1)^4 can't be found without using the definition of differentiation. Of course, some of you may think that if I want the slope I should simply differentiate, but I'd like to know whether it can be done without the definition.
If there is such a method, I would be happy if you could tell me.
In parallel, I'm also wondering whether an algorithm that finds the slope of y = (x^2 + 3x + 1)^4 without using the definition of differentiation could be programmed.

  • Answer # 1

    Hello.

    The least squares method can be used with nth-order polynomials as well.
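
    For the quadratic case, a rough self-contained sketch might look like the following (the sample points and helper names are my own for illustration; the normal equations are derived with calculus on paper, but the program itself only builds and solves a 3x3 linear system):

    #include <algorithm>
    #include <cmath>
    #include <iostream>
    #include <vector>

    // Solve a 3x3 linear system M * t = v by Gauss-Jordan elimination with partial pivoting
    bool Solve3x3(double M[3][3], double v[3], double t[3])
    {
        for (int col = 0; col < 3; ++col)
        {
            int piv = col;
            for (int r = col + 1; r < 3; ++r)
                if (fabs(M[r][col]) > fabs(M[piv][col])) { piv = r; }
            if (fabs(M[piv][col]) < 1e-12) { return false; }
            std::swap(v[col], v[piv]);
            for (int c = 0; c < 3; ++c) { std::swap(M[col][c], M[piv][c]); }
            for (int r = 0; r < 3; ++r)
            {
                if (r == col) { continue; }
                double f = M[r][col] / M[col][col];
                for (int c = 0; c < 3; ++c) { M[r][c] -= f * M[col][c]; }
                v[r] -= f * v[col];
            }
        }
        for (int i = 0; i < 3; ++i) { t[i] = v[i] / M[i][i]; }
        return true;
    }

    // Least-squares fit of y = a*x^2 + b*x + c via the normal equations
    bool FitQuadratic(const std::vector<double>& X, const std::vector<double>& Y,
                      double& a, double& b, double& c)
    {
        double S[5] = { 0, 0, 0, 0, 0 }; // S[k] = sum of x^k
        double T[3] = { 0, 0, 0 };       // T[k] = sum of x^k * y
        for (size_t i = 0; i < X.size(); ++i)
        {
            double xk = 1;
            for (int k = 0; k < 5; ++k)
            {
                S[k] += xk;
                if (k < 3) { T[k] += xk * Y[i]; }
                xk *= X[i];
            }
        }
        double M[3][3] = { { S[4], S[3], S[2] },
                           { S[3], S[2], S[1] },
                           { S[2], S[1], S[0] } };
        double v[3] = { T[2], T[1], T[0] };
        double t[3];
        if (!Solve3x3(M, v, t)) { return false; }
        a = t[0]; b = t[1]; c = t[2];
        return true;
    }

    int main(void)
    {
        // sample points near y = 0.5*x^2 + 4*x + 3.33
        std::vector<double> X = { -3, -1, 0, 1, 2, 4 };
        std::vector<double> Y = { -4.1, -0.6, 3.4, 7.8, 13.4, 27.3 };
        double a, b, c;
        if (FitQuadratic(X, Y, a, b, c))
        { std::cout << "a=" << a << " b=" << b << " c=" << c << std::endl; }
        return 0;
    }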

  • Answer # 2

    Reposting what was written in a comment on another answer:

    As a simple approach that is not least squares, you could pick three suitable points, solve for the coefficients of the quadratic function from them, and evaluate the result against the remaining point cloud; then repeat and keep the best candidate... (could you call this RANSAC?)

    I implemented this very primitively. (The code below is C++, but if you know what you are doing, you can write it in C.)
    Depending on the data and the required accuracy, even such a rough method can give coarse coefficient values.
    (The code below uses OpenCV's cv::solve() to compute the coefficients {a, b, c} from three points, but any routine that solves the small system of simultaneous equations would do.)

    I think the results will improve if you add refinements such as:

    • When selecting the three points, choose combinations that are reasonably far apart from each other
    • In the evaluation, use only data points that are more than a certain distance from the three selected points

    #include <algorithm>
    #include <cmath>
    #include <iostream>
    #include <random>
    #include <opencv2/core.hpp>

    // Quadratic function y = F(x; a, b, c) = a*x*x + b*x + c
    inline double F(double x, double a, double b, double c)
    { return (a * x + b) * x + c; }
    // main
    int main(void)
    {
        // for random numbers
        std::random_device seed_gen;
        std::mt19937 MT(seed_gen());
        // Create a data point group with random numbers
        const int N = 50;  // number of data points
        double X[N], Y[N]; // data point coordinates
        {
            // coefficients of the correct answer
            const double TrueA = 0.5;
            const double TrueB = 4.0;
            const double TrueC = 3.33;
            // PI
            const double PI = acos(-1.0);
            // maximum error amount
            const double MaxErr = 5;
            //
            std::uniform_real_distribution<double> Dist;
            for (int i = 0; i < N; ++i)
            {
                // take a point on the correct curve ...
                X[i] = Dist(MT) * 100 - 50;
                Y[i] = F(X[i], TrueA, TrueB, TrueC);
                // perturb it with some error to make a data point
                double ErrR = Dist(MT) * MaxErr;
                double ErrTheta = Dist(MT) * 2 * PI;
                X[i] += cos(ErrTheta) * ErrR;
                Y[i] += sin(ErrTheta) * ErrR;
                std::cout << X[i] << "," << Y[i] << std::endl;
            }
        }
        // Repeatedly pick 3 data points, solve for the coefficients, and keep the best result
        double ResultA, ResultB, ResultC; // result coefficients
        {
            const double deltaThresh = 2.5;
            cv::Matx<double, 3, 3> A;
            cv::Matx<double, 3, 1> b;
            cv::Matx<double, 3, 1> x;
            int Indexes[N];
            for (int i = 0; i < N; ++i) { Indexes[i] = i; }
            int CurrScore = -1;
            for (int i = 0; i < N * 3; ++i) // repeat a number of times
            {
                std::shuffle(Indexes, Indexes + N, MT);
                // using OpenCV because writing the coefficient-solving step by hand is tedious
                for (int j = 0; j < 3; ++j)
                {
                    double X_ = X[Indexes[j]];
                    double Y_ = Y[Indexes[j]];
                    A(j, 0) = X_ * X_;
                    A(j, 1) = X_;
                    A(j, 2) = 1;
                    b(j) = Y_;
                }
                cv::solve(A, b, x);
                // evaluate the goodness of the coefficients using the remaining data points
                int Score = 0;
                for (int j = 3; j < N; ++j)
                {
                    double X_ = X[Indexes[j]];
                    double Y_ = Y[Indexes[j]];
                    double delta = fabs(Y_ - F(X_, x(0), x(1), x(2)));
                    if (delta <= deltaThresh) { ++Score; }
                }
                if (Score > CurrScore)
                {
                    CurrScore = Score;
                    ResultA = x(0); ResultB = x(1); ResultC = x(2);
                }
            }
        }
        std::cout << ResultA << "," << ResultB << "," << ResultC << std::endl;
        //
        std::cin.ignore();
        return 0;
    }

  • Answer # 3

    Since you don't want to use differentiation, how about a GA (genetic algorithm)?
    If you use (a, b, c) as the gene and take the mean of the squared errors against the given coordinates as the fitness, I think a reasonably good solution can be obtained.

    For GA, for example, "Real-value GA frontier" might be helpful.
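
    As a rough illustration (this is a minimal steady-state real-valued GA of my own, not the method from that book; the population size, mutation width, and sample data are arbitrary assumptions):

    #include <algorithm>
    #include <iostream>
    #include <random>
    #include <vector>

    struct Gene { double a, b, c, fit; };

    // Fitness: mean squared error of y = a*x^2 + b*x + c against the data (smaller is better)
    double MSE(const Gene& g, const std::vector<double>& X, const std::vector<double>& Y)
    {
        double s = 0;
        for (size_t i = 0; i < X.size(); ++i)
        {
            double d = (g.a * X[i] + g.b) * X[i] + g.c - Y[i];
            s += d * d;
        }
        return s / X.size();
    }

    int main(void)
    {
        // sample points near y = 0.5*x^2 + 4*x + 3.33
        std::vector<double> X = { -3, -1, 0, 1, 2, 4 };
        std::vector<double> Y = { -4.1, -0.6, 3.4, 7.8, 13.4, 27.3 };

        std::mt19937 mt(std::random_device{}());
        std::uniform_real_distribution<double> init(-10, 10);
        std::normal_distribution<double> mut(0, 0.5);   // Gaussian mutation
        std::uniform_int_distribution<int> pick(0, 19); // parent selection

        // initial population of 20 random genes
        std::vector<Gene> pop(20);
        for (Gene& g : pop)
        { g.a = init(mt); g.b = init(mt); g.c = init(mt); g.fit = MSE(g, X, Y); }

        for (int gen = 0; gen < 2000; ++gen)
        {
            // crossover (blend of two parents) plus mutation makes one child
            const Gene& p1 = pop[pick(mt)];
            const Gene& p2 = pop[pick(mt)];
            Gene child;
            child.a = 0.5 * (p1.a + p2.a) + mut(mt);
            child.b = 0.5 * (p1.b + p2.b) + mut(mt);
            child.c = 0.5 * (p1.c + p2.c) + mut(mt);
            child.fit = MSE(child, X, Y);
            // the child replaces the worst individual if it is better
            auto worst = std::max_element(pop.begin(), pop.end(),
                [](const Gene& l, const Gene& r) { return l.fit < r.fit; });
            if (child.fit < worst->fit) { *worst = child; }
        }
        auto best = std::min_element(pop.begin(), pop.end(),
            [](const Gene& l, const Gene& r) { return l.fit < r.fit; });
        std::cout << best->a << "," << best->b << "," << best->c << std::endl;
        return 0;
    }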

  • Answer # 4


    I don't quite understand what

      Do not use differentiation, etc.

    means.
    A program only has to implement the necessary formulas (derived beforehand by working the math out on paper), so is the point that you don't want differentiation to appear even in the paper-and-pencil work that derives those formulas?

    Now, in the case of quadratic-function fitting by plain least squares...

    I think it can be worked on paper into the form of an eigenvalue problem, but in that case you will need to find a library that solves eigenvalue problems.

    If instead you want to solve the above problem by numerical search, methods like steepest descent or Newton's method generally require differentiating (on paper) the objective function being minimized. If that is the cumbersome part, numerical differentiation can be substituted.
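
    For illustration, here is a minimal sketch of steepest descent where the gradient of the sum-of-squared-errors objective is obtained by numerical (central) differences rather than a hand-derived formula; the step sizes, iteration count, and sample data are arbitrary assumptions of mine:

    #include <iostream>
    #include <vector>

    // Objective to minimize: sum of squared errors of y = t[0]*x^2 + t[1]*x + t[2]
    double E(const double t[3], const std::vector<double>& X, const std::vector<double>& Y)
    {
        double s = 0;
        for (size_t i = 0; i < X.size(); ++i)
        {
            double d = (t[0] * X[i] + t[1]) * X[i] + t[2] - Y[i];
            s += d * d;
        }
        return s;
    }

    int main(void)
    {
        // sample points near y = 0.5*x^2 + 4*x + 3.33
        std::vector<double> X = { -3, -1, 0, 1, 2, 4 };
        std::vector<double> Y = { -4.1, -0.6, 3.4, 7.8, 13.4, 27.3 };
        double t[3] = { 0, 0, 0 }; // initial guess for (a, b, c)
        const double h = 1e-6;     // offset for the central difference
        const double lr = 1e-4;    // step size
        for (int iter = 0; iter < 200000; ++iter)
        {
            double g[3];
            for (int k = 0; k < 3; ++k)
            {
                // numerical derivative instead of a symbolic one
                double tp[3] = { t[0], t[1], t[2] };
                double tm[3] = { t[0], t[1], t[2] };
                tp[k] += h;
                tm[k] -= h;
                g[k] = (E(tp, X, Y) - E(tm, X, Y)) / (2 * h);
            }
            for (int k = 0; k < 3; ++k) { t[k] -= lr * g[k]; }
        }
        std::cout << t[0] << "," << t[1] << "," << t[2] << std::endl;
        return 0;
    }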

    If you use a derivative-free method for the search, the differentiation on paper may not be needed at all (for example, the downhill simplex method of Nelder and Mead).
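
    A compact sketch of the downhill simplex idea applied to the same objective might look like this (a simplified textbook variant with arbitrary sample data and iteration count, not a tuned implementation):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Objective: sum of squared errors of y = p[0]*x^2 + p[1]*x + p[2]
    double E(const std::vector<double>& p,
             const std::vector<double>& X, const std::vector<double>& Y)
    {
        double s = 0;
        for (size_t i = 0; i < X.size(); ++i)
        {
            double d = (p[0] * X[i] + p[1]) * X[i] + p[2] - Y[i];
            s += d * d;
        }
        return s;
    }

    int main(void)
    {
        // sample points near y = 0.5*x^2 + 4*x + 3.33
        std::vector<double> X = { -3, -1, 0, 1, 2, 4 };
        std::vector<double> Y = { -4.1, -0.6, 3.4, 7.8, 13.4, 27.3 };
        const int n = 3; // number of parameters (a, b, c)
        // simplex of n+1 vertices around an initial guess at the origin
        std::vector<std::vector<double>> s(n + 1, std::vector<double>(n, 0.0));
        for (int i = 0; i < n; ++i) { s[i + 1][i] = 1.0; }
        std::vector<double> f(n + 1);
        for (int i = 0; i <= n; ++i) { f[i] = E(s[i], X, Y); }

        for (int iter = 0; iter < 5000; ++iter)
        {
            // order vertices: s[0] best ... s[n] worst
            for (int i = 0; i <= n; ++i)
                for (int j = i + 1; j <= n; ++j)
                    if (f[j] < f[i]) { std::swap(f[i], f[j]); std::swap(s[i], s[j]); }
            // centroid of all vertices except the worst
            std::vector<double> cen(n, 0.0);
            for (int i = 0; i < n; ++i)
                for (int k = 0; k < n; ++k) { cen[k] += s[i][k] / n; }
            // point cen + t*(cen - worst)
            auto mix = [&](double t) {
                std::vector<double> p(n);
                for (int k = 0; k < n; ++k) { p[k] = cen[k] + t * (cen[k] - s[n][k]); }
                return p;
            };
            std::vector<double> r = mix(1.0); // reflection
            double fr = E(r, X, Y);
            if (fr < f[0])
            {
                std::vector<double> e = mix(2.0); // expansion
                double fe = E(e, X, Y);
                if (fe < fr) { s[n] = e; f[n] = fe; } else { s[n] = r; f[n] = fr; }
            }
            else if (fr < f[n - 1]) { s[n] = r; f[n] = fr; }
            else
            {
                bool outside = fr < f[n];
                std::vector<double> c = mix(outside ? 0.5 : -0.5); // contraction
                double fc = E(c, X, Y);
                if (fc < (outside ? fr : f[n])) { s[n] = c; f[n] = fc; }
                else // shrink every vertex toward the best one
                    for (int i = 1; i <= n; ++i)
                    {
                        for (int k = 0; k < n; ++k) { s[i][k] = 0.5 * (s[i][k] + s[0][k]); }
                        f[i] = E(s[i], X, Y);
                    }
            }
        }
        int best = 0;
        for (int i = 1; i <= n; ++i) { if (f[i] < f[best]) best = i; }
        std::cout << s[best][0] << "," << s[best][1] << "," << s[best][2] << std::endl;
        return 0;
    }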

  • Answer # 5

    A simple explanation of the least squares method (straight line) - Beautiful Story of High School Mathematics
    That link is about straight-line approximation, but the basic principle is the same for curves.
    In short:

    1. Assume a straight line y = ax + b
    2. For each data point, take the difference in y-coordinate (in other words, the error) from the point on the line with the same x-coordinate, and compute the sum of the squares of these differences
    3. Find the coefficients a and b of the line so that this sum is minimized

    That is all there is to it. Differentiation is used for the "minimized" condition in step 3.
    You said you are not good at differentiation, but please give it a try, since only differentiation of polynomials is involved.
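
    Once the differentiation in step 3 has been done on paper, the resulting formulas are short enough to code directly; a minimal sketch (the sample points are made up) might be:

    #include <iostream>
    #include <vector>

    int main(void)
    {
        // sample points roughly on y = 2x + 1
        std::vector<double> X = { 0, 1, 2, 3, 4 };
        std::vector<double> Y = { 1.1, 2.9, 5.2, 6.8, 9.1 };
        double n = X.size();
        double Sx = 0, Sy = 0, Sxx = 0, Sxy = 0;
        for (size_t i = 0; i < X.size(); ++i)
        {
            Sx += X[i];
            Sy += Y[i];
            Sxx += X[i] * X[i];
            Sxy += X[i] * Y[i];
        }
        // closed-form solution of the two normal equations:
        //   a*Sxx + b*Sx = Sxy
        //   a*Sx  + b*n  = Sy
        double a = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx);
        double b = (Sy - a * Sx) / n;
        std::cout << "a=" << a << " b=" << b << std::endl; // close to a=2, b=1
        return 0;
    }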

    Addendum

    Regarding:

      An algorithm that finds the slope of y = (x^2 + 3x + 1)^4 without using the definition of differentiation

    Let a be the x-coordinate of the point whose slope you want. Take the difference between the y-coordinate there and the y-coordinate at a point slightly away from a (such as a + 0.01), and divide it by the difference in the x-coordinates.
    For example, if you want the slope at a = 1, calculate
    [{(1.01)^2 + 3*1.01 + 1}^4 - (1^2 + 3*1 + 1)^4] / 0.01
    Strictly speaking, though, this is precisely the definition of the derivative (the difference quotient); differentiating the formula symbolically is a separate matter of the properties of derivatives and the rules for computing them.
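
    That calculation is only a few lines to program; a minimal sketch:

    #include <iostream>

    // f(x) = (x^2 + 3x + 1)^4
    double f(double x)
    {
        double u = x * x + 3 * x + 1;
        return u * u * u * u;
    }

    int main(void)
    {
        double a = 1.0;  // point where we want the slope
        double h = 0.01; // small offset
        double slope = (f(a + h) - f(a)) / h; // difference quotient
        // prints about 2543; as h shrinks, it approaches the exact slope 2500
        std::cout << slope << std::endl;
        return 0;
    }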

    Added 8/29

    If you don't want to use differentiation in step 3 of the least squares method, you can consider completing the square.

    pA^2 + qB^2 + r - 2sA - 2tB + 2uAB
    = p{A^2 + 2(uB - s)/p * A} + qB^2 - 2tB + r
    = p{A + (uB - s)/p}^2 + (a quadratic in B)

    Then complete the square for B in the remaining quadratic in the same way, and determine A and B so that the contents of each square become 0.

    However, this is more cumbersome than differentiation.