
A few things to note when using cv::findFundamentalMat
Reposted from .cn/s/blog_13w9a.html
In newer versions of OpenCV, a lot of the C code has been re-wrapped as C++, the corresponding call interfaces have changed, and the documentation has grown less and less clear, which often leads to all kinds of problems in practice.
Epipolar geometry comes up constantly when working with stereo image pairs, and computing the fundamental matrix is a very common task. OpenCV implements the fundamental-matrix algorithms. In the old C code, the RANSAC method for computing the fundamental matrix has a bug where the iteration count does not converge, which can make every run exhaust the maximum allowed number of samples; the root cause is an error in the formula that computes the sampling confidence. I have not read the new code carefully, so I cannot say whether this bug has been fixed. The bug does not noticeably change the final result, it only hurts efficiency: where a few samples would normally be enough to stop iterating, under this bug it may take hundreds of samples.
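For reference, the standard RANSAC sampling rule that the confidence formula is supposed to implement looks roughly like the sketch below. This is the textbook update, not the actual OpenCV source, and the function name is mine:

#include <cmath>
#include <algorithm>

// Textbook RANSAC termination rule (illustrative sketch only, not the OpenCV implementation).
// p: desired confidence, w: current inlier ratio, m: sample size (7 or 8 points for F).
// The required number of samples is N = log(1 - p) / log(1 - w^m), re-evaluated whenever a
// better model (higher inlier ratio) is found; a broken version of this update is what keeps
// the iteration count pinned at the maximum.
int requiredSamples(double p, double w, int m, int maxIters)
{
    double goodSample = std::pow(w, m);                               // P(all m points in a sample are inliers)
    goodSample = std::min(std::max(goodSample, 1e-12), 1.0 - 1e-12);  // keep the logarithms finite
    int iters = (int)std::ceil(std::log(1.0 - p) / std::log(1.0 - goodSample));
    return std::min(std::max(iters, 1), maxIters);
}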
The new fundamental-matrix function also has a few usage issues. Let's look at cv::findFundamentalMat:
//! the algorithm for finding fundamental matrix
enum
{
    FM_7POINT = CV_FM_7POINT, //!< 7-point algorithm
    FM_8POINT = CV_FM_8POINT, //!< 8-point algorithm
    FM_LMEDS  = CV_FM_LMEDS,  //!< least-median algorithm
    FM_RANSAC = CV_FM_RANSAC  //!< RANSAC algorithm
};

//! finds fundamental matrix from a set of corresponding 2D points
CV_EXPORTS Mat findFundamentalMat( const Mat& points1, const Mat& points2,
                                   CV_OUT vector<uchar>& mask, int method=FM_RANSAC,
                                   double param1=3., double param2=0.99 );

//! finds fundamental matrix from a set of corresponding 2D points
CV_EXPORTS_W Mat findFundamentalMat( const Mat& points1, const Mat& points2,
                                     int method=FM_RANSAC,
                                     double param1=3., double param2=0.99 );
Above is OpenCV's C++ interface for computing the fundamental matrix; internally it still calls the C implementation, with only a thin wrapper on top. Some documents online claim that the const Mat& points1 and points2 parameters can be passed a vector<Point2f> directly; this is wrong. Passing a vector<Point2f> directly may compile without error, but it crashes at runtime, because the cv::Mat constructor does not split each Point2f into two floating-point values stored as two Mat elements; it keeps each Point2f as a single Mat element, so findFundamentalMat fails as soon as it reads the Mat.
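To see what is meant here, below is a minimal sketch (my own illustration, assuming OpenCV 2.x and that the usual headers are included) of what the Mat constructor actually produces from a vector<Point2f>, and how to get a plain Nx2 float layout from it:

std::vector<cv::Point2f> pts;
pts.push_back(cv::Point2f(1.f, 2.f));
pts.push_back(cv::Point2f(3.f, 4.f));
cv::Mat m(pts);            // m.rows == 2, m.cols == 1, type CV_32FC2: each element is still a whole Point2f
cv::Mat m2 = m.reshape(1); // 2x2, CV_32FC1: the Nx2 single-channel float layout that the code below builds by hand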
So it is best to build Mat points1 and Mat points2 explicitly. The matrix can be 2xN or Nx2, where N is the number of feature points. The other thing to watch is the matrix type: it must not be CV_64F. findFundamentalMat parses points1 and points2 as float internally, so CV_64F data will be read incorrectly and the program will crash. Use CV_32F. Below is example code that computes F from matched feature points:
// vector<KeyPoint> m_LeftKey;
// vector<KeyPoint> m_RightKey;
// vector<DMatch> m_Matches;
// The three variables above have already been computed: the extracted keypoints and their matches.
// Now compute F directly.
// Allocate space
int ptCount = (int)m_Matches.size();
Mat p1(ptCount, 2, CV_32F);
Mat p2(ptCount, 2, CV_32F);
// Convert the KeyPoints to Mat
Point2f pt;
for (int i = 0; i < ptCount; i++)
{
    pt = m_LeftKey[m_Matches[i].queryIdx].pt;
    p1.at<float>(i, 0) = pt.x;
    p1.at<float>(i, 1) = pt.y;
    pt = m_RightKey[m_Matches[i].trainIdx].pt;
    p2.at<float>(i, 0) = pt.x;
    p2.at<float>(i, 1) = pt.y;
}
// Compute F with the RANSAC method
// Mat m_Fundamental;
// The variable above holds the fundamental matrix
// vector<uchar> m_RANSACStatus;
// The variable above has already been defined; it stores each point's status after RANSAC
m_Fundamental = findFundamentalMat(p1, p2, m_RANSACStatus, FM_RANSAC);
// Count the outliers
int OutlinerCount = 0;
for (int i = 0; i < ptCount; i++)
{
    if (m_RANSACStatus[i] == 0) // a status of 0 marks an outlier
        OutlinerCount++;
}
// Collect the inliers
// vector<Point2f> m_LeftInlier;
// vector<Point2f> m_RightInlier;
// vector<DMatch> m_InlierMatches;
// The three variables above store the inlier points and their match relations
int InlinerCount = ptCount - OutlinerCount;
m_InlierMatches.resize(InlinerCount);
m_LeftInlier.resize(InlinerCount);
m_RightInlier.resize(InlinerCount);
InlinerCount = 0;
for (int i = 0; i < ptCount; i++)
{
    if (m_RANSACStatus[i] != 0)
    {
        m_LeftInlier[InlinerCount].x = p1.at<float>(i, 0);
        m_LeftInlier[InlinerCount].y = p1.at<float>(i, 1);
        m_RightInlier[InlinerCount].x = p2.at<float>(i, 0);
        m_RightInlier[InlinerCount].y = p2.at<float>(i, 1);
        m_InlierMatches[InlinerCount].queryIdx = InlinerCount;
        m_InlierMatches[InlinerCount].trainIdx = InlinerCount;
        InlinerCount++;
    }
}
// Convert the inliers into a format drawMatches can use
vector<KeyPoint> key1(InlinerCount);
vector<KeyPoint> key2(InlinerCount);
KeyPoint::convert(m_LeftInlier, key1);
KeyPoint::convert(m_RightInlier, key2);
// Display the inlier matches that remain after computing F
// Mat m_matLeftImage;
// Mat m_matRightImage;
// The two variables above hold the left and right images
Mat OutImage;
drawMatches(m_matLeftImage, key1, m_matRightImage, key2, m_InlierMatches, OutImage);
IplImage outIpl = OutImage; // wrap the Mat header for the old C highgui API
cvNamedWindow("Match features", 1);
cvShowImage("Match features", &outIpl);
cvWaitKey(0);
cvDestroyWindow("Match features");
That is the core code. After loading the left and right images, first extract and match feature points using the method described in an earlier article, then run the code above to compute the fundamental matrix and display the inlier matches, as shown:
Initial matches:
Inlier matches after RANSAC:
findEssentialMat and recoverPose do not work with normalised coordinates as expected (Bug #4474)
Status: Cancelled
Priority: Normal
Assignee: -
Category: calibration, 3d
Target version: -
Affected version: branch 'master' (3.0-dev)
Operating System: Mac OSX
Difficulty: -
HW Platform: x86
Pull request: -
Description
I have asked this question elsewhere, but I feel like this is a lack of documentation or a bug. The summary is this:
When attempting to recover the pose between two cameras, one might do the following (undistorting corresponding image points, obtaining normalised coordinates and then computing and decomposing the essential matrix).
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients);
Mat E = findEssentialMat(imgpts1, imgpts2, 1.0, Point2d(0,0), RANSAC, 0.999, 3, mask); // coords are normalised, so pass f=1, (cx,cy)=(0,0) here
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);
That is, after the call to undistortPoints, image coordinates are in range [-1,1].
However, the documentation of undistortPoints states that
New camera matrix (3x3) or new projection matrix (3x4). P1 or P2 computed by cv::stereoRectify can be passed here. If the matrix is empty, the identity new camera matrix is used.
so it is possible to pass the original camera matrix here and thus obtain undistorted but non-normalised coordinates. So it should be possible to inform findEssentialMat and recoverPose of the fact that the coords passed in are not normalised (I've checked the source code; both functions do use the parameters to normalise the coordinates). So one should be able to do this:
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K); // original camera matrix should still be valid
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
double focal = K.at<double>(0,0);
Point2d principalPoint(K.at<double>(0,2), K.at<double>(1,2));
Mat E = findEssentialMat(imgpts1, imgpts2, focal, principalPoint, RANSAC, 0.999, 3, mask); // pass real focal length and principal point
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, focal, principalPoint, mask);
In my understanding, both of these should work. What I have found, however, is that I need to pass the original camera matrix K as undistortPoints's last parameter, but still tell findEssentialMat and recoverPose that the coordinates are normalised even though they are not; otherwise, the results are very weird and unusable. So this is what I currently do:
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K); // original camera matrix should still be valid
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
Mat E = findEssentialMat(imgpts1, imgpts2, 1, Point2d(0,0), RANSAC, 0.999, 3, mask); // pretend that coordinates are normalised
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask); // profit ???
I feel that there is something wrong with the documentation or the implementation of these functions.
Updated by the reporter:
I have since determined by manually labelling correspondences that it does work as expected; it seems my data was simply off. However, the documentation could be improved somewhat by stating the relation between the parameters.
Status changed from New to Cancelled
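For what it's worth, newer OpenCV 3.x releases also expose overloads of findEssentialMat and recoverPose that take the camera matrix directly, which avoids juggling the focal length and principal point by hand. A minimal sketch, assuming those overloads are available in your build (variable names follow the snippets above):

Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K); // stay in pixel coordinates
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
Mat E = findEssentialMat(imgpts1, imgpts2, K, RANSAC, 0.999, 3.0, mask); // K carries focal length and principal point
Mat R, t;
recoverPose(E, imgpts1, imgpts2, K, R, t, mask);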