<p>I'm late to the party as well, but I installed OpenCV 2.4.9 for Mac OS X, and the <code>drawMatches</code> function doesn't exist in my distribution. I also tried the second approach with <code>find_obj</code>, and that didn't work for me either. With that, I decided to write my own implementation that mimics <code>drawMatches</code> to the best of my ability, and this is what I've produced.</p>
<p>I've provided my own images: one is of a cameraman, and the other is the same image, but rotated 55 degrees counter-clockwise.</p>
<p>The basic premise of what I wrote is that I allocate an output RGB image where the number of rows is the maximum of the two images, to accommodate placing both of them in the output, and the number of columns is simply the sum of the two images' columns. Be advised that I assume both images are grayscale.</p>
<p>I place each image in its corresponding spot, then run a loop over all of the matched keypoints. I extract which keypoints matched between the two images, then extract their <code>(x,y)</code> coordinates. I draw circles at each detected location, then draw a line connecting these circles together.</p>
<p>Bear in mind that a keypoint detected in the second image is with respect to its own coordinate system. If you want to place it in the final output image, you need to offset the column coordinate by the number of columns from the first image, so that the column coordinate is with respect to the output image's coordinate system.</p>
<p>Without further ado:</p>
<pre><code>import numpy as np
import cv2

def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0

    This function takes in two images with their associated
    keypoints, as well as a list of DMatch data structures (matches)
    that contains which keypoints matched in which images.

    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.

    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.

    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint
              detection algorithms
    matches - A list of matches of corresponding keypoints through any
              OpenCV keypoint matching algorithm
    """

    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    # Create the output image
    # The rows of the output are the largest between the two images
    # and the columns are simply the sum of the two together
    # The intent is to make this a colour image, so make this 3 channels
    out = np.zeros((max([rows1,rows2]), cols1+cols2, 3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255,0,0), 1)

    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out
</code></pre>
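<p>One caveat: the function assumes both inputs are grayscale because of the <code>np.dstack</code> calls. If your images might be colour, a small guard before calling it could look like the following sketch (the helper name <code>ensure_gray</code> is my own, not part of OpenCV):</p>
<pre><code>import cv2

def ensure_gray(img):
    # Convert a 3-channel BGR image to grayscale; pass
    # already-grayscale (2D) images through untouched
    if len(img.shape) == 3:
        return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return img
</code></pre>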
<hr/>
<p>To illustrate this, here are the two images I used:</p>
<p><img src="https://i.stack.imgur.com/qh2Qm.png" alt="Cameraman Image"/></p>
<p><img src="https://i.stack.imgur.com/4phVl.png" alt="Rotated Cameraman Image"/></p>
<p>I used OpenCV's ORB detector to detect the keypoints, and used the normalized Hamming distance as the distance measure for similarity, as it is a binary descriptor. As such:</p>
<pre><code>import numpy as np
import cv2
img1 = cv2.imread('cameraman.png', 0) # Original image - ensure grayscale
img2 = cv2.imread('cameraman_rot55.png', 0) # Rotated image - ensure grayscale
# Create ORB detector with 1000 keypoints with a scaling pyramid factor
# of 1.2
orb = cv2.ORB(1000, 1.2)
# Detect keypoints of original image
(kp1,des1) = orb.detectAndCompute(img1, None)
# Detect keypoints of rotated image
(kp2,des2) = orb.detectAndCompute(img2, None)
# Create matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Do matching
matches = bf.match(des1,des2)
# Sort the matches based on distance. Least distance
# is better
matches = sorted(matches, key=lambda val: val.distance)
# Show only the top 10 matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, matches[:10])
</code></pre>
<hr/>
<p>This is the image I get:</p>
<p><img src="https://i.stack.imgur.com/L4RUT.png" alt="Matched Features"/></p>
<hr/>
<h2>Use with <code>knnMatch</code></h2>
<p>I'd like to note that the above code only works if you assume that the matches appear in a 1D list. However, if you decide to use the <code>knnMatch</code> method from <code>cv2.BFMatcher</code>, what is returned is a list of lists. Specifically, given the descriptors in <code>img1</code> called <code>des1</code> and the descriptors in <code>img2</code> called <code>des2</code>, each element in the list returned from <code>knnMatch</code> is another list of <code>k</code> matches from <code>des2</code> which are the closest to each descriptor in <code>des1</code>. Therefore, the first element from the output of <code>knnMatch</code> is a list of <code>k</code> matches from <code>des2</code> which were the closest to the first descriptor found in <code>des1</code>. The second element from the output of <code>knnMatch</code> is a list of <code>k</code> matches from <code>des2</code> which were the closest to the second descriptor found in <code>des1</code>, and so on.</p>
<p>To make the most sense of <code>knnMatch</code>, you must limit the total number of neighbours to match to <code>k=2</code>. The reason is that you want at least two matched points for each source keypoint available in order to verify the quality of the match; if the quality is good enough, you'll want to use those to draw your matches and show them on the screen. You can use a very simple ratio test (credit goes to <a href="http://www.cs.ubc.ca/~lowe/home.html" rel="noreferrer">David Lowe</a>) that ensures that the distance from the first matched point in <code>des2</code> to a descriptor in <code>des1</code> is within some fraction of the distance to the second matched point from <code>des2</code>. Therefore, to turn what is returned from <code>knnMatch</code> into what the code I wrote above requires, iterate through the matches, apply the ratio test above and check that it passes. If it does, add the first matched keypoint to a new list.</p>
<p>Assuming that you created all of the variables as you did before declaring the <code>BFMatcher</code> instance, you'd now do the following to adapt the <code>knnMatch</code> method for use with <code>drawMatches</code>:</p>
<pre><code># Create matcher
# Cross-checking is disabled here: with crossCheck=True, knnMatch can
# return inner lists with fewer than k entries, which would break the
# unpacking in the ratio test below
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)

# Perform KNN matching
matches = bf.knnMatch(des1, des2, k=2)

# Apply ratio test
good = []
for m,n in matches:
    if m.distance &lt; 0.75*n.distance:
        # Add the first matched keypoint to the list
        # if the ratio test passes
        good.append(m)

# Or do a list comprehension
#good = [m for (m,n) in matches if m.distance &lt; 0.75*n.distance]

# Now perform drawMatches
out = drawMatches(img1, kp1, img2, kp2, good)
</code></pre>
<p>I'd like to attribute the above modifications to user <a href="https://stackoverflow.com/users/4355475/ryanmeasel">@ryanmeasel</a>; the answer in which these modifications were found is in his post: <a href="https://stackoverflow.com/questions/20172953/opencv-python-no-drawmatchesknn-function/35615024#35615024">OpenCV Python : No drawMatchesknn function</a>.</p>