Feature descriptors describe the features of an image; by matching feature descriptors we can match features between images.
OpenCV official tutorial
Basic workflow
- Initialize a matcher
- Call the matcher's match method to get a list of DMatch objects
- Draw the matching result with drawMatches (a minimal sketch of the whole flow follows)
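A minimal sketch of these three steps, assuming ORB features and hypothetical file names query.jpg / train.jpg:

import cv2

img1 = cv2.imread("query.jpg", 0)  # query image, grayscale (hypothetical file name)
img2 = cv2.imread("train.jpg", 0)  # train image (hypothetical file name)
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)       # 1. initialize the matcher
matches = bf.match(des1, des2)                              # 2. match -> list of DMatch
out = cv2.drawMatches(img1, kp1, img2, kp2, matches, None)  # 3. draw the matches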
The result of
matches = algorithm.match(descriptor1, descriptor2)
is a list of DMatch objects. Each DMatch object has the following attributes:
DMatch.distance - Distance between descriptors. The lower, the better it is.
DMatch.trainIdx - Index of the descriptor in train descriptors
DMatch.queryIdx - Index of the descriptor in query descriptors
DMatch.imgIdx - Index of the train image.
The image that descriptor1 comes from is the queryImage; the image that descriptor2 comes from is the trainImage.
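Since queryIdx and trainIdx index into the two keypoint lists, the matched pixel coordinates can be recovered from each DMatch. A small sketch, continuing from the ORB example above:

for m in matches[:5]:
    pt_query = kp1[m.queryIdx].pt  # (x, y) in the query image
    pt_train = kp2[m.trainIdx].pt  # (x, y) in the train image
    print(m.distance, pt_query, pt_train)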
cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) → outImg
"""
Parameters:
img1 – First source image.
keypoints1 – Keypoints from the first source image.
img2 – Second source image.
keypoints2 – Keypoints from the second source image.
matches1to2 – Matches from the first image to the second one, which means that keypoints1[i] has a corresponding point in keypoints2[matches[i]] .
outImg – Output image. Its content depends on the flags value defining what is drawn in the output image. See possible flags bit values below.
matchColor – Color of matches (lines and connected keypoints). If matchColor==Scalar::all(-1) , the color is generated randomly.
singlePointColor – Color of single keypoints (circles), i.e. keypoints that do not have a match. If singlePointColor==Scalar::all(-1) , the color is generated randomly.
matchesMask – Mask determining which matches are drawn. If the mask is empty, all matches are drawn.
flags – Flags setting drawing features. Possible flags bit values are defined by DrawMatchesFlags.
"""
struct DrawMatchesFlags
{
    enum
    {
        DEFAULT = 0,                // Output image matrix will be created (Mat::create),
                                    // i.e. existing memory of output image may be reused.
                                    // Two source images, matches, and single keypoints
                                    // will be drawn.
                                    // For each keypoint, only the center point will be
                                    // drawn (without a circle around the keypoint with the
                                    // keypoint size and orientation).
        DRAW_OVER_OUTIMG = 1,       // Output image matrix will not be created (Mat::create).
                                    // Matches will be drawn on existing content of output image.
        NOT_DRAW_SINGLE_POINTS = 2, // Single keypoints will not be drawn.
        DRAW_RICH_KEYPOINTS = 4     // For each keypoint, the circle around the keypoint with
                                    // keypoint size and orientation will be drawn.
    };
};
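In the Python bindings these flags are exposed as module-level constants (cv2.DRAW_MATCHES_FLAGS_* in OpenCV 3.x; OpenCV 4.x also exposes them as cv2.DrawMatchesFlags_*), and they can be combined with a bitwise OR. A small sketch, continuing with the ORB matches from above:

flags = (cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS
         | cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS)
out = cv2.drawMatches(img1, kp1, img2, kp2, matches[:20], None,
                      matchColor=(0, 255, 0),        # green lines for matches
                      singlePointColor=(255, 0, 0),  # unmatched keypoints (not drawn here)
                      flags=flags)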
Brute-Force Matching
Brute force: for each descriptor in the first set, find the descriptor with the smallest distance in the second set.
bf = cv2.BFMatcher(normType=cv2.NORM_HAMMING, crossCheck=True)
'''
. @param normType One of NORM_L1, NORM_L2, NORM_HAMMING, NORM_HAMMING2. L1 and L2 norms are
. preferable choices for SIFT and SURF descriptors, NORM_HAMMING should be used with ORB, BRISK and
. BRIEF, NORM_HAMMING2 should be used with ORB when WTA_K==3 or 4 (see ORB::ORB constructor
. description).
. @param crossCheck If it is false, this is the default BFMatcher behaviour, where it finds the k
. nearest neighbors for each query descriptor. If crossCheck==true, then the knnMatch() method with
. k=1 will only return pairs (i,j) such that for the i-th query descriptor the j-th descriptor in the
. matcher's collection is the nearest and vice versa, i.e. the BFMatcher will only return consistent
. pairs. This technique usually produces the best results with a minimal number of outliers when
. there are enough matches. It is an alternative to the ratio test used by D. Lowe in the SIFT paper.
The matching result contains many wrong matches, which come in two kinds:
False-positive matches: non-corresponding keypoints reported as matches (these we can work on and try to eliminate)
False-negative matches: true correspondences that the matching algorithm rejects (nothing further can be done about these)
crossCheck (cross-checking) is one way to eliminate false-positive matches.
The other way is the ratio test, which is applied to kNN results; when using knnMatch, crossCheck must be set to False.
'''
matches = bf.match(des1, des2)
# match() simply returns the single best match for each query descriptor
matches = sorted(matches, key=lambda x: x.distance)
Filtering matches with kNN
bf = cv2.BFMatcher(normType=cv2.NORM_HAMMING, crossCheck=False)
matches = bf.knnMatch(des1, des2, k=2)
# find the k nearest matches for each descriptor; k specifies how many neighbors to return
img = cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, flags=2)
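With k=2, Lowe's ratio test can be applied directly to these results: keep a match only when the best neighbor is clearly closer than the second best. A short sketch (the 0.75 threshold is the common suggestion from the SIFT paper; tune it per application):

good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:  # best match clearly beats the runner-up
        good.append(m)
img = cv2.drawMatches(img1, kp1, img2, kp2, good, None, flags=2)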
FLANN Matching
FLANN uses fast approximate nearest-neighbor search, which makes it well suited to finding matches in large amounts of data.
FlannBasedMatcher builds an index tree over the descriptors, which pays off when matching large datasets (for example, searching for a matching image among hundreds of images). The Brute-Force matcher does nothing of the kind; it simply keeps the train descriptors in memory.
FLANN matching is far faster than Brute-Force, roughly 10x.
# FLANN parameters
FLANN_INDEX_KDTREE = 1  # in FLANN's enum, the KD-tree index is 1 (0 is the linear index)
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in range(len(matches))]
# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]
draw_params = dict(matchColor = (0,255,0), singlePointColor = (255,0,0), matchesMask = matchesMask, flags = 0)
img = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
Examples
BFMatcher
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread("train.jpg", 0)
img2 = cv2.imread("test.jpg", 0)
orb = cv2.ORB_create()
keypoints1, des1 = orb.detectAndCompute(img1, None)
keypoints2, des2 = orb.detectAndCompute(img2, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)
img3 = cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches[:20], None, flags=2)
plt.imshow(img3)
plt.show()

KNN Matcher
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread("train.jpg", 0)
img2 = cv2.imread("test.jpg", 0)
orb = cv2.ORB_create()
keypoints1, des1 = orb.detectAndCompute(img1, None)
keypoints2, des2 = orb.detectAndCompute(img2, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
matches = bf.knnMatch(des1, des2, k=2)
img3 = cv2.drawMatchesKnn(img1, keypoints1, img2, keypoints2, matches[:30], None, flags=2)
plt.imshow(img3)
plt.show()

FLANN
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread("train.jpg", 0)
img2 = cv2.imread("test.jpg", 0)
sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
# FLANN parameters
FLANN_INDEX_KDTREE = 1  # KD-tree index is 1 in FLANN's enum (0 is the linear index)
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0, 0] for i in range(len(matches))]
# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.5 * n.distance:
        matchesMask[i] = [1, 0]
draw_params = dict(matchColor=(0, 255, 0),
singlePointColor=(255, 0, 0),
matchesMask=matchesMask,
flags=0)
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, **draw_params)
plt.imshow(img3)
plt.show()

Troubleshooting
- error: (-215) (type == CV_8U && dtype == CV_32S) || dtype == CV_32F in function cv::batchDistance
This assertion typically fires when the descriptor type does not match the matcher, e.g. feeding binary ORB/BRIEF descriptors (CV_8U) to FlannBasedMatcher with the default KD-tree index (which expects CV_32F float descriptors), or using NORM_HAMMING on float SIFT/SURF descriptors.
http://answers.opencv.org/question/10046/feature-2d-feature-matching-fails-with-assert-statcpp/
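If you do want FLANN with binary descriptors such as ORB, the usual route is the LSH index rather than the KD-tree. A sketch using the parameter values suggested in the OpenCV feature-matching tutorial (table_number, key_size, and multi_probe_level are tuning knobs, not fixed requirements):

FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6,       # number of hash tables
                    key_size=12,          # hash key size in bits
                    multi_probe_level=1)  # 0 = standard LSH, 1 = multi-probe LSH
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)  # des1/des2 here are ORB (CV_8U) descriptors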