
Referring to the Python source code below: for two images with the same layout (only the numbers printed on them differ), feature points are extracted with the AKAZE detector, narrowed down with a k-nearest-neighbor match, and a transformation matrix is then created to warp one image onto the other.
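Below is a minimal OpenCvSharp sketch of the detection and matching steps described above, as I understand them; the file names and the choice of BFMatcher are my assumptions, not taken from the referenced code.

using OpenCvSharp;

// Detect AKAZE features in both images and match descriptors with k-NN (k = 2).
// "imageA.png" / "imageB.png" are placeholder file names.
Mat img1 = Cv2.ImRead("imageA.png");
Mat img2 = Cv2.ImRead("imageB.png");

AKAZE akaze = AKAZE.Create();
Mat desc1 = new Mat(), desc2 = new Mat();
KeyPoint[] kp1, kp2;
akaze.DetectAndCompute(img1, null, out kp1, desc1);
akaze.DetectAndCompute(img2, null, out kp2, desc2);

// AKAZE descriptors are binary, so a Hamming-distance brute-force matcher fits.
BFMatcher matcher = new BFMatcher(NormTypes.Hamming);
DMatch[][] matches = matcher.KnnMatch(desc1, desc2, 2);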

I think this is a really rudimentary question, but I would appreciate your help.

Since I have no knowledge of Python, I am working from what I can make of the OpenCV calls in the referenced code.

I narrowed down the feature points and converted the coordinate pairs (before and after the transformation) to Mat via Point2f arrays, but the affine transformation fails with the following assertion:

(M0.type() == CV_32F || M0.type() == CV_64F) && M0.rows == 2 && M0.cols == 3
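If I am reading the assertion correctly, it comes from warpAffine: the transformation matrix must have 2 rows and 3 columns, of type CV_32F or CV_64F. A minimal sketch of a matrix that would pass the check (src and dst here are placeholder Mats, not from my code):

// warpAffine accepts only a 2 x 3 CV_32F/CV_64F matrix; anything else trips this assertion.
Mat m = Mat.Eye(2, 3, MatType.CV_64FC1).ToMat(); // identity: [[1,0,0],[0,1,0]]
Mat dst = new Mat();
Cv2.WarpAffine(src, dst, m, src.Size());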
Corresponding source code

Referenced code

for m, n in matches:
    if m.distance < ratio * n.distance:
        good.append([m])
mtx = cv2.estimateAffinePartial2D(
    np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2),
    np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2))[0]
warped_image = cv2.warpAffine(img1, mtx, (img1.shape[1], img1.shape[0]))

Code that produces the error

// descriptor1 holds the descriptors of image A, descriptor2 those of image B
var matches = matcher.KnnMatch(descriptor1, descriptor2, 2);
List<Point2f> p1 = new List<Point2f>(), p2 = new List<Point2f>();
foreach (var m in matches)
{
    double dis1 = m[0].Distance;
    double dis2 = m[1].Distance;
    if (dis1 <= dis2 * match_per)
    {
        p1.Add(key_point1[m[0].QueryIdx].Pt);
        p2.Add(key_point2[m[1].TrainIdx].Pt);
    }
}
Mat matHenkan = Cv2.GetPerspectiveTransform(p1, p2); // 3 x 3, CV_64FC1
// State at this point:
// source image: MatMoto (1753 x 2480, CV_8UC3), result: MatKekka (new Mat()), transformation matrix: matHenkan (3 x 3, CV_64FC1)
Cv2.WarpAffine(MatMoto, MatKekka, matHenkan, Matread.Size());
// Here image B (MatMoto, 1753 x 2480, CV_8UC3)
// is being warped to align with image A (1764 x 2478, CV_8UC4).
What I tried

Ideally I would like to use estimateAffinePartial2D, but I could not work out what to pass to which parameter.

Looking at the error from the affine transformation, I suspected the problem was the dimensions of the matrix and perhaps its channel type, so I tried converting the transformation matrix further:

matHenkan.ConvertTo(matHenkan2, MatType.CV_8UC3, MatMoto.Cols, MatMoto.Rows);


The converted matrix came out as 3 x 3, CV_8UC1, and the error from the affine transformation was unchanged.
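For reference, my current understanding of the mismatch: Cv2.GetPerspectiveTransform returns a 3 x 3 perspective matrix, while the assertion demands 2 x 3, and Mat.ConvertTo only changes the element type (its extra arguments are a scale factor and an added offset, not a new size), so it cannot change the shape. Two sketches that should avoid the assertion, reusing the variable names from the code above:

// (a) Keep the 3 x 3 perspective matrix and warp with WarpPerspective instead:
Cv2.WarpPerspective(MatMoto, MatKekka, matHenkan, MatMoto.Size());

// (b) If the transform really is affine, take the top two rows to get a 2 x 3 matrix:
Mat affine = matHenkan.SubMat(0, 2, 0, 3); // rows 0-1, cols 0-2, still CV_64FC1
Cv2.WarpAffine(MatMoto, MatKekka, affine, MatMoto.Size());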

While looking for a solution I also tried to install NumSharp from NuGet, but got this error:
Package 'NumSharp 0.20.5' could not be installed. You are trying to install this package into a project that targets '.NET Framework,Version=v4.5.2', but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.

Supplementary information (FW/tool version, etc.)

C#, OpenCvSharp3 (v4.00), .NET Framework 4.5.2

  • Answer #1

    The result is not perfect, but as for the subject of this question, "create a transformation matrix from the feature points of two images",
    I solved it with the method in the answer below, so I am leaving this as self-resolved.

    Sorry for the convoluted question.

  • Answer #2

    By using InputArray.Create(), I was able to call Cv2.EstimateAffinePartial2D:

    List<Point2f> p1 = new List<Point2f>(), p2 = new List<Point2f>();
    Mat matHenkan = Cv2.EstimateAffinePartial2D(InputArray.Create(p1), InputArray.Create(p2));


    However, the warped result still did not come out right.

      mtx = cv2.estimateAffinePartial2D(
            np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2),
            np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2))[0]


    In the reference Python source it is np.float32(...).reshape(-1, 1, 2); could that be the problem?
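    As far as I can tell, reshape(-1, 1, 2) only lays the coordinates out as an N x 1, 2-channel array, which is the same layout OpenCV uses for a vector of Point2f, so InputArray.Create(p1) should already be equivalent. What stands out instead is that the loop in the question takes p2 from m[1].TrainIdx, while the reference Python takes both queryIdx and trainIdx from m[0]. A sketch of the whole flow under that reading (variable names follow the question; the 0.75 ratio threshold is an assumption):

    List<Point2f> p1 = new List<Point2f>(), p2 = new List<Point2f>();
    foreach (DMatch[] m in matches)
    {
        if (m[0].Distance <= m[1].Distance * 0.75)
        {
            p1.Add(key_point1[m[0].QueryIdx].Pt);
            p2.Add(key_point2[m[0].TrainIdx].Pt); // TrainIdx also from m[0], as in the Python
        }
    }
    Mat matHenkan = Cv2.EstimateAffinePartial2D(InputArray.Create(p1), InputArray.Create(p2)); // 2 x 3, CV_64FC1
    Cv2.WarpAffine(MatMoto, MatKekka, matHenkan, MatMoto.Size());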