
An affine transformation is applied to the read image to align it with a template of the same layout.
However, the transformed image comes out reduced in size and shifted toward the upper left.

I wrote this based on a Python source, but it seems I have not fully reproduced the NumPy-related parts.

I would appreciate any advice on how to solve this.

The problem I am having

1. Read image: 3307 × 2338 (the template is 2480 × 1753)

2. Result: 3307 × 2338

3. Image after manually cropping the needed part out of the result: 1819 × 1310

Corresponding source code
// MatHinagata: template image = image A
using (Mat MatHinagata = ReadToMat(styleClass.FileFullPath))
{
    Mat MatKekka = new Mat();
    KeyPoint[] key_point1;          // key points of the comparison-source image
    KeyPoint[] key_point2;          // key points of the comparison-destination image
    Mat descriptor1 = new Mat();    // descriptors of the comparison-source image
    Mat descriptor2 = new Mat();    // descriptors of the comparison-destination image
    DescriptorMatcher matcher;      // matching method
    DMatch[][] matches;             // matching results between feature vectors
    // MatYomikomi: read image = image B
    using (Mat MatYomikomi = ReadToMat(imgB))
    {
        Comfunc.ReadXMLToObject(System.Windows.Forms.Application.StartupPath + @"\01" + tgtFrlb.StyleID.ToString("00") + ".xml", out loadAry);
        // Feature detection and descriptor computation
        akaze.DetectAndCompute(MatHinagata, null, out key_point1, descriptor1);   // template image = image A
        akaze.DetectAndCompute(MatYomikomi, null, out key_point2, descriptor2);   // read image = image B
        matcher = new BFMatcher();
        matches = matcher.KnnMatch(descriptor1, descriptor2, 2);
        const double match_per = 0.8;   // matching threshold for the ratio test
        List<Point2f> p1 = new List<Point2f>(), p2 = new List<Point2f>();
        // Filter the detected matches (ratio test)
        foreach (var m in matches)
        {
            double dis1 = m[0].Distance;
            double dis2 = m[1].Distance;
            if (dis1 <= dis2 * match_per)
            {
                p1.Add(key_point1[m[0].QueryIdx].Pt);
                p2.Add(key_point2[m[1].TrainIdx].Pt);
            }
        }
        // Estimate the transformation matrix
        Mat matresult2 = Cv2.EstimateAffinePartial2D(InputArray.Create(p1), InputArray.Create(p2));
        // Transform the image
        Cv2.WarpAffine(MatYomikomi, MatKekka, matresult2, MatYomikomi.Size());
    }
}
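For reference, OpenCV's convention for this call is that Cv2.EstimateAffinePartial2D(from, to) returns the 2 × 3 matrix that maps the first point set onto the second. With the code above, p1 holds template points and p2 holds read-image points, so the estimated matrix maps template coordinates to read-image coordinates, while WarpAffine is applied to the read image. A minimal sketch of estimating the opposite direction, read image to template, is shown below (variable names are taken from the code above; this only illustrates the argument order and is not a confirmed fix for this question):

// Sketch: estimate the matrix that maps read-image points (p2) onto template points (p1),
// then warp the read image onto a canvas the size of the template.
// (The matrix from the original argument order could also be inverted with Cv2.InvertAffineTransform.)
Mat readToTemplate = Cv2.EstimateAffinePartial2D(InputArray.Create(p2), InputArray.Create(p1));
Cv2.WarpAffine(MatYomikomi, MatKekka, readToTemplate, MatHinagata.Size());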
# Excerpt of only the part that I could not fully reproduce
# Generate the transformation matrix
mtx = cv2.estimateAffinePartial2D(
    np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2),
    np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2))[0]
# Transform the image
warped_image = cv2.warpAffine(img1, mtx, (img1.shape[1], img1.shape[0]))
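Note that the Python excerpt takes both queryIdx and trainIdx from the best match m[0] and builds the point arrays with NumPy's float32(...).reshape(-1, 1, 2). A rough OpenCvSharp equivalent of that point selection might look like the following (good, key_point1, key_point2 and match_per are assumed from the code above; this is a sketch, not verified against the original Python project):

// Select point pairs the same way as the Python excerpt: both indices come from the best match m[0].
List<Point2f> p1 = new List<Point2f>();
List<Point2f> p2 = new List<Point2f>();
foreach (var m in good)                      // good = matches that passed the ratio test
{
    p1.Add(key_point1[m[0].QueryIdx].Pt);    // kp1[m[0].queryIdx].pt
    p2.Add(key_point2[m[0].TrainIdx].Pt);    // kp2[m[0].trainIdx].pt
}
// InputArray.Create on a List<Point2f> plays the role of the np.float32(...).reshape(-1, 1, 2) arrays.
Mat mtx = Cv2.EstimateAffinePartial2D(InputArray.Create(p1), InputArray.Create(p2));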
What I tried

I tried specifying both the read image's size and the template's size in Cv2.WarpAffine, but neither worked.
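For reference, the size argument of Cv2.WarpAffine only sets the size of the output canvas; it does not change the transform itself, so if the matrix is off, the content stays reduced and shifted regardless of which size is passed. The two variations tried would look roughly like this (names from the code above):

// Output canvas at the read image's size (as in the question code)
Cv2.WarpAffine(MatYomikomi, MatKekka, matresult2, MatYomikomi.Size());
// Output canvas at the template's size
Cv2.WarpAffine(MatYomikomi, MatKekka, matresult2, MatHinagata.Size());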

Supplementary information (FW/tool version, etc.)

C#, OpenCvSharp3 (v4.00), .NET Framework 4.5.2

  • Answer #1

    There was an error in the code; fixing it improved the result.
    For now, I am marking this as self-resolved.

    Thank you for your help.

    First fix: descriptor1 and descriptor2 were reversed.

    (Before correction)

    matches = matcher.KnnMatch(descriptor1, descriptor2, 2);


    (Revised)

    matches = matcher.KnnMatch(descriptor2, descriptor1, 2);

    Second fix: key_point1 and key_point2 were reversed.

    (Before correction)

    p1.Add(key_point2[m[0].QueryIdx].Pt);
    p2.Add(key_point1[m[1].TrainIdx].Pt);


    (Revised)

    p1.Add(key_point1[m[0].QueryIdx].Pt);
    p2.Add(key_point2[m[1].TrainIdx].Pt);
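    As a general reference (not the exact code of this answer): with matcher.KnnMatch(descriptor2, descriptor1, 2), OpenCV's convention is that DMatch.QueryIdx indexes the key points of the first argument (key_point2, the read image) and DMatch.TrainIdx indexes the key points of the second argument (key_point1, the template), with both indices usually taken from the best match m[0]. A minimal sketch under that convention:

    // Sketch of the OpenCV query/train convention after KnnMatch(descriptor2, descriptor1, 2);
    // not the poster's verified code.
    foreach (var m in matches)
    {
        if (m[0].Distance <= m[1].Distance * match_per)   // ratio test
        {
            p2.Add(key_point2[m[0].QueryIdx].Pt);   // read-image point (query side)
            p1.Add(key_point1[m[0].TrainIdx].Pt);   // template point (train side)
        }
    }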