A question about aligning images with OpenCV

We are working on a project that needs to use OpenCV to align two images of different sizes. The two images share a common (or very similar) region, and the alignment should be based on that region. How should this be done? I am new to OpenCV and need it urgently, so please explain in detail. Thanks!


I did image stitching for my undergraduate thesis, so let me walk you through the algorithm. First, the results:

The four captured input images:

The stitched result:

The detailed pipeline:

1. Keep the camera's optical center fixed and take several pictures by rotating the camera (horizontally in this example). The camera position must not move: any translation introduces depth, and the scene can then only be recovered with more complex techniques such as 3D reconstruction. Pure image stitching only allows rotating the camera, not translating it. The one exception is when the photographed object is planar, for example a mural.

2. Extract local feature points in each image; the most popular choices are SIFT and SURF keypoints with their corresponding descriptors:

In OpenCV both the SIFT and SURF algorithms are available. For example, SIFT keypoints are extracted with the SiftFeatureDetector class and their descriptors with the SiftDescriptorExtractor class.
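A minimal sketch of this detection/description step with OpenCV 2.4.x (SIFT lives in the nonfree module; the file name is just a placeholder):

#include <vector>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/features2d.hpp"
using namespace cv;

int main()
{
    Mat img = imread("img.png", 0);        // load as grayscale
    SiftFeatureDetector detector;          // detect SIFT keypoints
    std::vector<KeyPoint> keypoints;
    detector.detect(img, keypoints);
    SiftDescriptorExtractor extractor;     // compute 128-dimensional SIFT descriptors
    Mat descriptors;
    extractor.compute(img, keypoints, descriptors);
    return 0;
}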

3. Match the descriptors to obtain a set of candidate point correspondences:

In OpenCV, descriptor matching can be done with the BruteForceMatcher class (BFMatcher in OpenCV 2.4 and later).
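A small sketch of the matching step, assuming descriptors_1 and descriptors_2 come from the previous step (BFMatcher is the 2.4-era name of the brute-force matcher):

#include <vector>
#include "opencv2/features2d/features2d.hpp"
using namespace cv;

std::vector<DMatch> matchDescriptors(const Mat& descriptors_1, const Mat& descriptors_2)
{
    BFMatcher matcher(NORM_L2);            // L2 distance suits float descriptors such as SIFT/SURF
    std::vector<DMatch> matches;
    matcher.match(descriptors_1, descriptors_2, matches);
    return matches;                        // one best match per descriptor of the first image
}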

4. Use RANSAC to reject wrong matches and obtain the transformation between the two images (commonly called the homography matrix). In this step you can see the refined correct matches separated from the wrong matches produced by the previous step.

Correct matches:

Wrong matches:

In OpenCV this step is done with the findHomography function: it only needs the coordinates of the matched point pairs and runs RANSAC internally to reject wrong matches.
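A sketch of this step, assuming pts_1 and pts_2 are the matched point coordinates from the previous step; the optional mask output marks which matches RANSAC kept as inliers:

#include <vector>
#include "opencv2/calib3d/calib3d.hpp"
using namespace cv;

Mat homographyWithRansac(const std::vector<Point2f>& pts_1, const std::vector<Point2f>& pts_2)
{
    std::vector<uchar> inlier_mask;        // 1 = inlier (correct match), 0 = rejected by RANSAC
    Mat H = findHomography(pts_1, pts_2, CV_RANSAC, 3.0, inlier_mask);
    return H;                              // 3x3 CV_64F homography mapping pts_1 to pts_2
}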

5. With the transformation matrix, one image can be warped onto the other. This process can be repeated indefinitely as long as the coordinate origins of the different frames are kept aligned; in the overlap region the color is the average of the corresponding pixels of the two images:

Every pixel here is obtained by interpolation: each output pixel is mapped by the inverse transform back to a point in the source image and interpolated from the four surrounding pixels. (It must be inverse-mapping interpolation! If you simply apply the forward transform, you will find many black holes in the new image.) In OpenCV, given a transformation and an image, the warpPerspective function produces the warped image.
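A short sketch of the warping step; warpPerspective does the inverse mapping with bilinear interpolation internally, which is why no holes appear. img_right and H are assumed to come from the previous steps:

#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;

Mat warpToCanvas(const Mat& img_right, const Mat& H, Size canvas_size)
{
    Mat canvas;
    warpPerspective(img_right, canvas, H, canvas_size, INTER_LINEAR);
    return canvas;                         // the other image can then be copied onto this canvas
}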

6. Next comes the stitching seam. It can be handled by locating the overlap region of the two images and blending them with color weights that change gradually across the overlap:

There does not seem to be a ready-made function for this; you have to find the overlap region yourself and compute a per-pixel weight for the blend.
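A minimal sketch of such a linear feathering blend, assuming the two images are already aligned on a common canvas and that overlap_x0 / overlap_x1 mark the overlapping columns (both names are illustrative):

#include "opencv2/core/core.hpp"
using namespace cv;

void featherSeam(const Mat& left, const Mat& right, Mat& out, int overlap_x0, int overlap_x1)
{
    out = left.clone();
    for (int x = overlap_x0; x < overlap_x1; x++)
    {
        double w = double(overlap_x1 - x) / (overlap_x1 - overlap_x0); // weight ramps from 1 to 0
        out.col(x) = left.col(x) * w + right.col(x) * (1.0 - w);
    }
}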

7. The seams at the top and bottom edges still remain; my strategy is simply to crop them off!

8. One problem remains: direct stitching often distorts the images badly. This happens because the camera was rotated during capture, and drawing pictures taken at different rotation angles onto one planar canvas stretches them out severely:

The solution is to first convert each image to cylindrical coordinates based on the camera's focal length: map all the images onto a cylinder, then unroll the cylinder to get the final canvas, so that every image covers roughly the same angle:
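A sketch of this cylindrical warp done with inverse mapping via remap (so it leaves no holes); f is the focal length in pixels and is an assumed, user-supplied value:

#include <cmath>
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;

Mat cylindricalWarp(const Mat& img, float f)
{
    Mat map_x(img.size(), CV_32FC1), map_y(img.size(), CV_32FC1);
    float xc = img.cols / 2.0f, yc = img.rows / 2.0f;          // image center
    for (int y = 0; y < img.rows; y++)
        for (int x = 0; x < img.cols; x++)
        {
            float theta = (x - xc) / f;                         // angle on the cylinder
            map_x.at<float>(y, x) = f * std::tan(theta) + xc;   // source pixel for this cylinder pixel
            map_y.at<float>(y, x) = (y - yc) / std::cos(theta) + yc;
        }
    Mat out;
    remap(img, out, map_x, map_y, INTER_LINEAR);                // pixels that fall outside become black
    return out;
}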

After the cylindrical warp, the initial images above become like this:

Note that straight lines become curves. For convenience, first crop off the extra curved parts at the top and bottom, then rerun steps 2 through 7 to get a clean stitch. There does not seem to be a ready-made function for this step either.

This stitching functionality has since been integrated into OpenCV: versions after OpenCV 2.3 provide a Stitcher class that takes several images and directly outputs the stitched result.
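A minimal sketch of that one-call pipeline (the file names are placeholders):

#include <vector>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/stitching/stitcher.hpp"
using namespace cv;

int main()
{
    std::vector<Mat> imgs;
    imgs.push_back(imread("1.jpg"));
    imgs.push_back(imread("2.jpg"));
    imgs.push_back(imread("3.jpg"));
    Mat pano;
    Stitcher stitcher = Stitcher::createDefault(false);    // false = do not try the GPU
    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status == Stitcher::OK)
        imwrite("pano.jpg", pano);
    return 0;
}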

Finally, here is the result from my thesis one more time. Back then I did not know about OpenCV and used Matlab; OpenCV's stitching is considerably faster:


First you need a feature detection algorithm to match some feature points between the two images. Since the images differ by a transformation (scaling + rotation + translation), you then solve for that transformation matrix from the matched feature points, and finally apply the transformation to one of the images.
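One possible way to sketch this idea in OpenCV 2.4 is to fit a scale + rotation + translation (partial affine) transform to the matched points with estimateRigidTransform and warp one image with it; pts_1 and pts_2 are assumed to be the matched coordinates from the feature step:

#include <vector>
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/video/tracking.hpp"
using namespace cv;

Mat alignBySimilarity(const Mat& img_to_warp, const std::vector<Point2f>& pts_1,
                      const std::vector<Point2f>& pts_2, Size out_size)
{
    // false = restrict to scale + rotation + translation; the result is a 2x3 matrix
    // (it comes back empty if the estimation fails)
    Mat A = estimateRigidTransform(pts_1, pts_2, false);
    Mat aligned;
    warpAffine(img_to_warp, aligned, A, out_size);
    return aligned;
}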


1. Use SIFT or SURF to obtain feature points.

2. Use the BBF algorithm to find matching points quickly.

3. Use RANSAC to remove wrong matches.

4. Warp the image with the homography obtained from RANSAC above; you can also refine the homography further with Newton-style iterations.

5. Then warp the image with the homography.

6. Finally, do gain compensation plus multi-band blending to assemble the mosaic (see the sketch below).
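A minimal sketch of the multi-band blending part using cv::detail::MultiBandBlender. The two images are assumed to be already warped into the common mosaic frame, mask1/mask2 are their valid-pixel masks (CV_8U), tl1/tl2 their top-left corners and mosaic_roi the mosaic rectangle; gain compensation (cv::detail::GainCompensator) is a separate step not shown here:

#include "opencv2/stitching/detail/blenders.hpp"
using namespace cv;

Mat multiBandBlend(const Mat& img1, const Mat& mask1, Point tl1,
                   const Mat& img2, const Mat& mask2, Point tl2, Rect mosaic_roi)
{
    detail::MultiBandBlender blender(false, 5);     // no GPU, 5 pyramid bands
    blender.prepare(mosaic_roi);
    Mat img1_s, img2_s;
    img1.convertTo(img1_s, CV_16S);                 // the blender expects CV_16SC3 input
    img2.convertTo(img2_s, CV_16S);
    blender.feed(img1_s, mask1, tl1);
    blender.feed(img2_s, mask2, tl2);
    Mat mosaic, mosaic_mask;
    blender.blend(mosaic, mosaic_mask);
    mosaic.convertTo(mosaic, CV_8U);
    return mosaic;
}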

Why do I keep saying "use", though.


Research and Implementation of Image and Video Stitching Based on SURF Features (Part 1)

I had long planned to work on real-time image stitching, but it was only after recently reading Zhang Yajuan's 2013 Xidian University thesis, "Research and Implementation of Image and Video Stitching Based on SURF Features", which is well organized, complete, and of real practical value, that I decided to use it as the backbone and, adapted to my own situation, carry out this "research and implementation of image and video stitching based on SURF features".

1. SURF with OpenCV

From 3.0 onwards, SURF was moved into "opencv_contrib-master", which I find awkward to work with, so here I stay with OpenCV 2.4.8, which I have been using all along. SURF is invoked like this:

// raw_surf.cpp : based on the corresponding OpenCV 2.4.8 sample

//

#include "stdafx.h"

#include <iostream>

#include "opencv2/core/core.hpp"

#include "opencv2/features2d/features2d.hpp"

#include "opencv2/highgui/highgui.hpp"

#include "opencv2/nonfree/features2d.hpp"

using namespace std;

using namespace cv;

int main( int argc, char** argv )

{

Mat img_1 = imread( "img_opencv_1.png", 0 );

Mat img_2 = imread( "img_opencv_2.png", 0 );

if( !img_1.data || !img_2.data )

{ std::cout << " --(!) Error reading images " << std::endl; return -1; }

//-- Step 1: Detect the keypoints using SURF Detector

int minHessian = 10000;

SurfFeatureDetector detector( minHessian );

std::vector<KeyPoint> keypoints_1, keypoints_2;

detector.detect( img_1, keypoints_1 );

detector.detect( img_2, keypoints_2 );

//-- Draw keypoints

Mat img_keypoints_1; Mat img_keypoints_2;

drawKeypoints( img_1, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );

drawKeypoints( img_2, keypoints_2, img_keypoints_2, Scalar::all(-1), DrawMatchesFlags::DEFAULT );

//-- Step 2: Calculate descriptors (feature vectors)

SurfDescriptorExtractor extractor;

Mat descriptors_1, descriptors_2;

extractor.compute( img_1, keypoints_1, descriptors_1 );

extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: Matching descriptor vectors with a brute force matcher

BFMatcher matcher(NORM_L2);

std::vector< DMatch > matches;

matcher.match( descriptors_1, descriptors_2, matches );

//-- Draw matches

Mat img_matches;

drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );

//-- Show detected (drawn) keypoints

imshow("Keypoints 1", img_keypoints_1 );

imshow("Keypoints 2", img_keypoints_2 );

//-- Show detected matches

imshow("Matches", img_matches );

waitKey(0);

return 0;

}

Here SurfFeatureDetector is used to find the keypoints and BFMatcher is used to compare the descriptors. This approach produces quite a few wrong matches, so a FLANN-based comparison is provided instead:

// raw_surf.cpp : based on the corresponding OpenCV 2.4.8 sample

//

#include "stdafx.h"

#include <iostream>

#include "opencv2/core/core.hpp"

#include "opencv2/features2d/features2d.hpp"

#include "opencv2/highgui/highgui.hpp"

#include "opencv2/nonfree/features2d.hpp"

using namespace std;

using namespace cv;

int main( int argc, char** argv )

{

Mat img_1 = imread( "img_opencv_1.png", 0 );

Mat img_2 = imread( "img_opencv_2.png", 0 );

if( !img_1.data || !img_2.data )

{ std::cout << " --(!) Error reading images " << std::endl; return -1; }

//-- Step 1: Detect the keypoints using SURF Detector

int minHessian = 400;

SurfFeatureDetector detector( minHessian );

std::vector<KeyPoint> keypoints_1, keypoints_2;

detector.detect( img_1, keypoints_1 );

detector.detect( img_2, keypoints_2 );

//-- Draw keypoints

Mat img_keypoints_1; Mat img_keypoints_2;

drawKeypoints( img_1, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );

drawKeypoints( img_2, keypoints_2, img_keypoints_2, Scalar::all(-1), DrawMatchesFlags::DEFAULT );

//-- Step 2: Calculate descriptors (feature vectors)

SurfDescriptorExtractor extractor;

Mat descriptors_1, descriptors_2;

extractor.compute( img_1, keypoints_1, descriptors_1 );

extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: Matching descriptor vectors using FLANN matcher

FlannBasedMatcher matcher;

std::vector< DMatch > matches;

matcher.match( descriptors_1, descriptors_2, matches );

double max_dist = 0; double min_dist = 100;

//-- Quick calculation of max and min distances between keypoints

for( int i = 0; i < descriptors_1.rows; i++ )

{ double dist = matches[i].distance;

if( dist < min_dist ) min_dist = dist;

if( dist > max_dist ) max_dist = dist;

}

printf("-- Max dist : %f
", max_dist );

printf("-- Min dist : %f
", min_dist );

//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist,

//-- or a small arbitrary value ( 0.02 ) in the event that min_dist is very

//-- small)

//-- PS.- radiusMatch can also be used here.

std::vector< DMatch > good_matches;

for( int i = 0; i < descriptors_1.rows; i++ )

{ if( matches[i].distance <= max(2*min_dist, 0.02) )

{ good_matches.push_back( matches[i]); }

}

//-- Draw only "good" matches

Mat img_matches;

drawMatches( img_1, keypoints_1, img_2, keypoints_2,

good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),

vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

//-- Show detected matches

imshow( "Good Matches", img_matches );

for( int i = 0; i < (int)good_matches.size(); i++ )

{ printf( "-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx ); }

waitKey(0);

return 0;

}

As you can see, apart from one wrong pair, all the matches are correct.

Continuing on, compute the homography matrix:

// raw_surf.cpp : based on the corresponding OpenCV 2.4.8 sample

//

#include "stdafx.h"

#include <iostream>

#include "opencv2/core/core.hpp"

#include "opencv2/features2d/features2d.hpp"

#include "opencv2/highgui/highgui.hpp"

#include "opencv2/nonfree/features2d.hpp"

#include "opencv2/calib3d/calib3d.hpp"

using namespace std;

using namespace cv;

int main( int argc, char** argv )

{

Mat img_1 = imread( "img_opencv_1.png", 0 );

Mat img_2 = imread( "img_opencv_2.png", 0 );

if( !img_1.data || !img_2.data )

{ std::cout << " --(!) Error reading images " << std::endl; return -1; }

//-- Step 1: Detect the keypoints using SURF Detector

int minHessian = 400;

SurfFeatureDetector detector( minHessian );

std::vector<KeyPoint> keypoints_1, keypoints_2;

detector.detect( img_1, keypoints_1 );

detector.detect( img_2, keypoints_2 );

//-- Draw keypoints

Mat img_keypoints_1; Mat img_keypoints_2;

drawKeypoints( img_1, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );

drawKeypoints( img_2, keypoints_2, img_keypoints_2, Scalar::all(-1), DrawMatchesFlags::DEFAULT );

//-- Step 2: Calculate descriptors (feature vectors)

SurfDescriptorExtractor extractor;

Mat descriptors_1, descriptors_2;

extractor.compute( img_1, keypoints_1, descriptors_1 );

extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: Matching descriptor vectors using FLANN matcher

FlannBasedMatcher matcher;

std::vector< DMatch > matches;

matcher.match( descriptors_1, descriptors_2, matches );

double max_dist = 0; double min_dist = 100;

//-- Quick calculation of max and min distances between keypoints

for( int i = 0; i < descriptors_1.rows; i++ )

{ double dist = matches[i].distance;

if( dist < min_dist ) min_dist = dist;

if( dist > max_dist ) max_dist = dist;

}

printf("-- Max dist : %f
", max_dist );

printf("-- Min dist : %f
", min_dist );

//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist,

//-- or a small arbitrary value ( 0.02 ) in the event that min_dist is very

//-- small)

//-- PS.- radiusMatch can also be used here.

std::vector< DMatch > good_matches;

for( int i = 0; i < descriptors_1.rows; i++ )

{ if( matches[i].distance <= /*max(2*min_dist, 0.02)*/ 3*min_dist )

{ good_matches.push_back( matches[i]); }

}

//-- Draw only "good" matches

Mat img_matches;

drawMatches( img_1, keypoints_1, img_2, keypoints_2,

good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),

vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

//-- Localize the object from img_1 in img_2

std::vector<Point2f> obj;

std::vector<Point2f> scene;

for( int i = 0; i < (int)good_matches.size(); i++ )

{

obj.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );

scene.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );

printf( "-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d
", i, good_matches[i].queryIdx, good_matches[i].trainIdx );

}

// call RANSAC directly

Mat H = findHomography( obj, scene, CV_RANSAC );

//-- Get the corners from the image_1 ( the object to be "detected" )

std::vector<Point2f> obj_corners(4);

obj_corners[0] = Point(0,0); obj_corners[1] = Point( img_1.cols, 0 );

obj_corners[2] = Point( img_1.cols, img_1.rows ); obj_corners[3] = Point( 0, img_1.rows );

std::vector<Point2f> scene_corners(4);

perspectiveTransform( obj_corners, scene_corners, H);

//-- Draw lines between the corners (the mapped object in the scene - image_2 )

Point2f offset( (float)img_1.cols, 0);

line( img_matches, scene_corners[0] + offset, scene_corners[1] + offset, Scalar(0, 255, 0), 4 );

line( img_matches, scene_corners[1] + offset, scene_corners[2] + offset, Scalar( 0, 255, 0), 4 );

line( img_matches, scene_corners[2] + offset, scene_corners[3] + offset, Scalar( 0, 255, 0), 4 );

line( img_matches, scene_corners[3] + offset, scene_corners[0] + offset, Scalar( 0, 255, 0), 4 );

//-- Show detected matches

imshow( "Good Matches Object detection", img_matches );

waitKey(0);

return 0;

}

A simplified and annotated version:

// raw_surf.cpp : based on the corresponding OpenCV 2.4.8 sample

//

#include "stdafx.h"

#include <iostream>

#include "opencv2/core/core.hpp"

#include "opencv2/features2d/features2d.hpp"

#include "opencv2/highgui/highgui.hpp"

#include "opencv2/nonfree/features2d.hpp"

#include "opencv2/calib3d/calib3d.hpp"

using namespace std;

using namespace cv;

int main( int argc, char** argv )

{

Mat img_1 = imread( "img_opencv_1.png", 0 );

Mat img_2 = imread( "img_opencv_2.png", 0 );

if( !img_1.data || !img_2.data )

{ std::cout << " --(!) Error reading images " << std::endl; return -1; }

//-- Step 1: detect keypoints with SURF

int minHessian = 400;

SurfFeatureDetector detector( minHessian );

std::vector<KeyPoint> keypoints_1, keypoints_2;

detector.detect( img_1, keypoints_1 );

detector.detect( img_2, keypoints_2 );

//-- Step 2: compute the SURF descriptors

SurfDescriptorExtractor extractor;

Mat descriptors_1, descriptors_2;

extractor.compute( img_1, keypoints_1, descriptors_1 );

extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: match the descriptors

FlannBasedMatcher matcher; // BFMatcher would do brute-force matching instead

std::vector< DMatch > matches;

matcher.match( descriptors_1, descriptors_2, matches );

// find the maximum and minimum descriptor distances

double max_dist = 0; double min_dist = 100;

for( int i = 0; i < descriptors_1.rows; i++ )

{

double dist = matches[i].distance;

if( dist < min_dist ) min_dist = dist;

if( dist > max_dist ) max_dist = dist;

}

std::vector< DMatch > good_matches;

for( int i = 0; i < descriptors_1.rows; i++ )

{

if( matches[i].distance <= 3*min_dist ) // the threshold here is 3 times min_dist

{

good_matches.push_back( matches[i]);

}

}

//-- Draw the "good" matches

Mat img_matches;

drawMatches( img_1, keypoints_1, img_2, keypoints_2,

good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),

vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

//-- Localize the object from img_1 in img_2

std::vector<Point2f> obj;

std::vector<Point2f> scene;

for( int i = 0; i < (int)good_matches.size(); i++ )

{

obj.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );

scene.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );

}

// call RANSAC directly to compute the homography

Mat H = findHomography( obj, scene, CV_RANSAC );

//-- Get the corners from the image_1 ( the object to be "detected" )

std::vector<Point2f> obj_corners(4);

obj_corners[0] = Point(0,0);

obj_corners[1] = Point( img_1.cols, 0 );

obj_corners[2] = Point( img_1.cols, img_1.rows );

obj_corners[3] = Point( 0, img_1.rows );

std::vector<Point2f> scene_corners(4);

perspectiveTransform( obj_corners, scene_corners, H);

//-- Draw lines between the corners (the mapped object in the scene - image_2 )

Point2f offset( (float)img_1.cols, 0);

line( img_matches, scene_corners[0] + offset, scene_corners[1] + offset, Scalar(0, 255, 0), 4 );

line( img_matches, scene_corners[1] + offset, scene_corners[2] + offset, Scalar( 0, 255, 0), 4 );

line( img_matches, scene_corners[2] + offset, scene_corners[3] + offset, Scalar( 0, 255, 0), 4 );

line( img_matches, scene_corners[3] + offset, scene_corners[0] + offset, Scalar( 0, 255, 0), 4 );

//-- Show detected matches

imshow( "Good Matches Object detection", img_matches );

waitKey(0);

return 0;

}

Two things are worth noting here. First, besides FlannBasedMatcher there is another matcher called BFMatcher, which does brute-force matching.

Second, when selecting the so-called good matches, the 3*min_dist rule is used; I believe this corresponds to the thesis setting "the error threshold to 3". If I have misunderstood, please point it out, thanks!

I also tested aerial photographs and continuous-casting (steel slab) images. The aerial photographs are natural images with rich features;

for the continuous-casting images, the surface noise dominates the original texture, so no homography could be obtained.

Finally, add code that classifies the RANSAC inliers and outliers, using a distance of 3 as the cutoff:

// raw_surf.cpp : based on the corresponding OpenCV 2.4.8 sample

//

#include "stdafx.h"

#include <iostream>

#include "opencv2/core/core.hpp"

#include "opencv2/imgproc/imgproc.hpp"

#include "opencv2/features2d/features2d.hpp"

#include "opencv2/highgui/highgui.hpp"

#include "opencv2/nonfree/features2d.hpp"

#include "opencv2/calib3d/calib3d.hpp"

using namespace std;

using namespace cv;

// distance between two Point2f

float fDistance(Point2f p1,Point2f p2)

{

float ftmp = (p1.x-p2.x)*(p1.x-p2.x) + (p1.y-p2.y)*(p1.y-p2.y);

ftmp = sqrt((float)ftmp);

return ftmp;

}

int main( int argc, char** argv )

{

Mat img_1 = imread( "img_opencv_1.png", 0 );

Mat img_2 = imread( "img_opencv_2.png", 0 );

//// added for the continuous-casting images

//img_1 = img_1(Rect(20,0,img_1.cols-40,img_1.rows));

//img_2 = img_2(Rect(20,0,img_1.cols-40,img_1.rows));

// cv::Canny(img_1,img_1,100,200);

// cv::Canny(img_2,img_2,100,200);

if( !img_1.data || !img_2.data )

{ std::cout << " --(!) Error reading images " << std::endl; return -1; }

//-- Step 1: detect keypoints with SURF

int minHessian = 400;

SurfFeatureDetector detector( minHessian );

std::vector<KeyPoint> keypoints_1, keypoints_2;

detector.detect( img_1, keypoints_1 );

detector.detect( img_2, keypoints_2 );

//-- Step 2: compute the SURF descriptors

SurfDescriptorExtractor extractor;

Mat descriptors_1, descriptors_2;

extractor.compute( img_1, keypoints_1, descriptors_1 );

extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: match the descriptors

FlannBasedMatcher matcher; // BFMatcher would do brute-force matching instead

std::vector< DMatch > matches;

matcher.match( descriptors_1, descriptors_2, matches );

// find the maximum and minimum descriptor distances

double max_dist = 0; double min_dist = 100;

for( int i = 0; i < descriptors_1.rows; i++ )

{

double dist = matches[i].distance;

if( dist < min_dist ) min_dist = dist;

if( dist > max_dist ) max_dist = dist;

}

std::vector< DMatch > good_matches;

for( int i = 0; i < descriptors_1.rows; i++ )

{

if( matches[i].distance <= 3*min_dist ) // the threshold here is 3 times min_dist

{

good_matches.push_back( matches[i]);

}

}

//-- Draw the "good" matches

Mat img_matches;

drawMatches( img_1, keypoints_1, img_2, keypoints_2,

good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),

vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

//-- Localize the object from img_1 in img_2

std::vector<Point2f> obj;

std::vector<Point2f> scene;

for( int i = 0; i < (int)good_matches.size(); i++ )

{

obj.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );

scene.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );

}

// call RANSAC directly to compute the homography

Mat H = findHomography( obj, scene, CV_RANSAC );

//-- Get the corners from the image_1 ( the object to be "detected" )

std::vector<Point2f> obj_corners(4);

obj_corners[0] = Point(0,0);

obj_corners[1] = Point( img_1.cols, 0 );

obj_corners[2] = Point( img_1.cols, img_1.rows );

obj_corners[3] = Point( 0, img_1.rows );

std::vector<Point2f> scene_corners(4);

perspectiveTransform( obj_corners, scene_corners, H);

// classify the inliers and outliers

std::vector<Point2f> scene_test(obj.size());

perspectiveTransform(obj,scene_test,H);

for (int i = 0; i < (int)obj.size(); i++) {

printf("%d is %f \n", i+1, fDistance(scene[i], scene_test[i]));

}

//-- Draw lines between the corners (the mapped object in the scene - image_2 )

Point2f offset( (float)img_1.cols, 0);

line( img_matches, scene_corners[0] + offset, scene_corners[1] + offset, Scalar(0, 255, 0), 4 );

line( img_matches, scene_corners[1] + offset, scene_corners[2] + offset, Scalar( 0, 255, 0), 4 );

line( img_matches, scene_corners[2] + offset, scene_corners[3] + offset, Scalar( 0, 255, 0), 4 );

line( img_matches, scene_corners[3] + offset, scene_corners[0] + offset, Scalar( 0, 255, 0), 4 );

//-- Show detected matches

imshow( "Good Matches Object detection", img_matches );

waitKey(0);

return 0;

}

The results:

Here the points with large error stand out clearly.

To sum up, this shows how to obtain the homography between two images with OpenCV. Not every pair of images yields a homography: the two images must actually be related, and natural images work best. For images like those from the production line, stitching has to rely on other methods.

2. Stitching and blending

Since the homography has already been computed, we simply reuse it here. One thing to keep straight is the relation between a "frame" and the mosaic. Generally either cylindrical or planar coordinates are used. In the thesis the images are laid out roughly in a horizontal row, so planar coordinates are used. Following the thesis's "frame-to-mosaic" approach, the images are stitched from left to right, computing the alignment one image at a time and then compositing it onto the mosaic.

To illustrate the algorithm, I use the church images that come with "Learning OpenCV".

The result is that, after SURF matching, the right image is warped into a state suitable for compositing.

Based on this, align the images:

// raw_surf.cpp : based on the corresponding OpenCV 2.4.8 sample

//

#include "stdafx.h"

#include <iostream>

#include "opencv2/core/core.hpp"

#include "opencv2/imgproc/imgproc.hpp"

#include "opencv2/features2d/features2d.hpp"

#include "opencv2/highgui/highgui.hpp"

#include "opencv2/nonfree/features2d.hpp"

#include "opencv2/calib3d/calib3d.hpp"

using namespace std;

using namespace cv;

int main( int argc, char** argv )

{

Mat img_1 ;

Mat img_2 ;

Mat img_raw_1 = imread("c1.bmp");

Mat img_raw_2 = imread("c3.bmp");

cvtColor(img_raw_1,img_1,CV_BGR2GRAY);

cvtColor(img_raw_2,img_2,CV_BGR2GRAY);

//-- Step 1: detect keypoints with SURF

int minHessian = 400;

SurfFeatureDetector detector( minHessian );

std::vector<KeyPoint> keypoints_1, keypoints_2;

detector.detect( img_1, keypoints_1 );

detector.detect( img_2, keypoints_2 );

//-- Step 2: compute the SURF descriptors

SurfDescriptorExtractor extractor;

Mat descriptors_1, descriptors_2;

extractor.compute( img_1, keypoints_1, descriptors_1 );

extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: match the descriptors

FlannBasedMatcher matcher; // BFMatcher would do brute-force matching instead

std::vector< DMatch > matches;

matcher.match( descriptors_1, descriptors_2, matches );

// find the maximum and minimum descriptor distances

double max_dist = 0; double min_dist = 100;

for( int i = 0; i < descriptors_1.rows; i++ )

{

double dist = matches[i].distance;

if( dist < min_dist ) min_dist = dist;

if( dist > max_dist ) max_dist = dist;

}

std::vector< DMatch > good_matches;

for( int i = 0; i < descriptors_1.rows; i++ )

{

if( matches[i].distance <= 3*min_dist ) // the threshold here is 3 times min_dist

{

good_matches.push_back( matches[i]);

}

}

//-- Localize the object from img_1 in img_2

std::vector<Point2f> obj;

std::vector<Point2f> scene;

for( int i = 0; i < (int)good_matches.size(); i++ )

{

// the frame-to-mosaic approach is used here, so the left image is the scene and the right image is the obj

scene.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );

obj.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );

}

// call RANSAC directly to compute the homography

Mat H = findHomography( obj, scene, CV_RANSAC );

// image alignment

Mat result;

warpPerspective(img_raw_2,result,H,Size(2*img_2.cols,img_2.rows));

Mat half(result,cv::Rect(0,0,img_2.cols,img_2.rows));

img_raw_1.copyTo(half);

imshow("result",result);

waitKey(0);

return 0;

}

Blend using the three methods mentioned in the thesis:

// raw_surf.cpp : based on the corresponding OpenCV 2.4.8 sample

//

#include "stdafx.h"

#include <iostream>

#include "opencv2/core/core.hpp"

#include "opencv2/imgproc/imgproc.hpp"

#include "opencv2/features2d/features2d.hpp"

#include "opencv2/highgui/highgui.hpp"

#include "opencv2/nonfree/features2d.hpp"

#include "opencv2/calib3d/calib3d.hpp"

using namespace std;

using namespace cv;

int main( int argc, char** argv )

{

Mat img_1 ;

Mat img_2 ;

Mat img_raw_1 = imread("c1.bmp");

Mat img_raw_2 = imread("c3.bmp");

cvtColor(img_raw_1,img_1,CV_BGR2GRAY);

cvtColor(img_raw_2,img_2,CV_BGR2GRAY);

//-- Step 1: detect keypoints with SURF

int minHessian = 400;

SurfFeatureDetector detector( minHessian );

std::vector<KeyPoint> keypoints_1, keypoints_2;

detector.detect( img_1, keypoints_1 );

detector.detect( img_2, keypoints_2 );

//-- Step 2: compute the SURF descriptors

SurfDescriptorExtractor extractor;

Mat descriptors_1, descriptors_2;

extractor.compute( img_1, keypoints_1, descriptors_1 );

extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: match the descriptors

FlannBasedMatcher matcher; // BFMatcher would do brute-force matching instead

std::vector< DMatch > matches;

matcher.match( descriptors_1, descriptors_2, matches );

// find the maximum and minimum descriptor distances

double max_dist = 0; double min_dist = 100;

for( int i = 0; i < descriptors_1.rows; i++ )

{

double dist = matches[i].distance;

if( dist < min_dist ) min_dist = dist;

if( dist > max_dist ) max_dist = dist;

}

std::vector< DMatch > good_matches;

for( int i = 0; i < descriptors_1.rows; i++ )

{

if( matches[i].distance <= 3*min_dist ) // the threshold here is 3 times min_dist

{

good_matches.push_back( matches[i]);

}

}

//-- Localize the object from img_1 in img_2

std::vector<Point2f> obj;

std::vector<Point2f> scene;

for( int i = 0; i < (int)good_matches.size(); i++ )

{

// the frame-to-mosaic approach is used here, so the left image is the scene and the right image is the obj

scene.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );

obj.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );

}

// call RANSAC directly to compute the homography

Mat H = findHomography( obj, scene, CV_RANSAC );

// image alignment

Mat result;

Mat resultback; // stores the new frame after the homography warp

warpPerspective(img_raw_2,result,H,Size(2*img_2.cols,img_2.rows));

result.copyTo(resultback);

Mat half(result,cv::Rect(0,0,img_2.cols,img_2.rows));

img_raw_1.copyTo(half);

imshow("ajust",result);

// fade-in/fade-out (linear) blending

Mat result_linerblend = result.clone();

double dblend = 0.0;

int ioffset =img_2.cols-100;

for (int i = 0; i < 100; i++)

{

result_linerblend.col(ioffset+i) = result.col(ioffset+i)*(1-dblend) + resultback.col(ioffset+i)*dblend;

dblend = dblend +0.01;

}

imshow("result_linerblend",result_linerblend);

// maximum-value blending

Mat result_maxvalue = result.clone();

for (int i = 0; i < result.rows; i++) {

for (int j = 0; j < 100; j++)

{

int iresult= result.at<Vec3b>(i,ioffset+j)[0]+ result.at<Vec3b>(i,ioffset+j)[1]+ result.at<Vec3b>(i,ioffset+j)[2];

int iresultback = resultback.at<Vec3b>(i,ioffset+j)[0]+ resultback.at<Vec3b>(i,ioffset+j)[1]+ resultback.at<Vec3b>(i,ioffset+j)[2];

if (iresultback > iresult)

{

result_maxvalue.at<Vec3b>(i,ioffset+j) = resultback.at<Vec3b>(i,ioffset+j);

}

}

}

imshow("result_maxvalue",result_maxvalue);

// thresholded weighted blending

Mat result_advance = result.clone();

for (int i = 0; i < result.rows; i++) {

for (int j = 0; j < 33; j++)

{

int iimg1= result.at<Vec3b>(i,ioffset+j)[0]+ result.at<Vec3b>(i,ioffset+j)[1]+ result.at<Vec3b>(i,ioffset+j)[2];

//int iimg2= resultback.at<Vec3b>(i,ioffset+j)[0]+ resultback.at<Vec3b>(i,ioffset+j)[1]+ resultback.at<Vec3b>(i,ioffset+j)[2];

int ilinerblend = result_linerblend.at<Vec3b>(i,ioffset+j)[0]+ result_linerblend.at<Vec3b>(i,ioffset+j)[1]+ result_linerblend.at<Vec3b>(i,ioffset+j)[2];

if (abs(iimg1 - ilinerblend) < 3)

{

result_advance.at<Vec3b>(i,ioffset+j) = result_linerblend.at<Vec3b>(i,ioffset+j);

}

}

}

for (int i = 0; i < result.rows; i++) {

for (int j = 33; j < 66; j++)

{

int iimg1= result.at<Vec3b>(i,ioffset+j)[0]+ result.at<Vec3b>(i,ioffset+j)[1]+ result.at<Vec3b>(i,ioffset+j)[2];

int iimg2= resultback.at<Vec3b>(i,ioffset+j)[0]+ resultback.at<Vec3b>(i,ioffset+j)[1]+ resultback.at<Vec3b>(i,ioffset+j)[2];

int ilinerblend = result_linerblend.at<Vec3b>(i,ioffset+j)[0]+ result_linerblend.at<Vec3b>(i,ioffset+j)[1]+ result_linerblend.at<Vec3b>(i,ioffset+j)[2];

if (abs(max(iimg1,iimg2) - ilinerblend) < 3)

{

result_advance.at<Vec3b>(i,ioffset+j) = result_linerblend.at<Vec3b>(i,ioffset+j);

}

else if (iimg2 > iimg1)

{

result_advance.at<Vec3b>(i,ioffset+j) = resultback.at<Vec3b>(i,ioffset+j);

}

}

}

for (int i = 0; i < result.rows; i++) {

for (int j = 66; j < 100; j++)

{

//int iimg1= result.at<Vec3b>(i,ioffset+j)[0]+ result.at<Vec3b>(i,ioffset+j)[1]+ result.at<Vec3b>(i,ioffset+j)[2];

int iimg2= resultback.at<Vec3b>(i,ioffset+j)[0]+ resultback.at<Vec3b>(i,ioffset+j)[1]+ resultback.at<Vec3b>(i,ioffset+j)[2];

int ilinerblend = result_linerblend.at<Vec3b>(i,ioffset+j)[0]+ result_linerblend.at<Vec3b>(i,ioffset+j)[1]+ result_linerblend.at<Vec3b>(i,ioffset+j)[2];

if (abs(iimg2 - ilinerblend) < 3)

{

result_advance.at<Vec3b>(i,ioffset+j) = result_linerblend.at<Vec3b>(i,ioffset+j);

}

else

{

result_advance.at<Vec3b>(i,ioffset+j) = resultback.at<Vec3b>(i,ioffset+j);

}

}

}

imshow("result_advance",result_advance);

waitKey(0);

return 0;

}

So far, maxvalue looks like the best blending method, but as the thesis notes, this kind of picture does not show off the blending algorithms well, so I also shot images similar to those in the thesis. It turns out that taking good source photos needs the right hardware and some technique; software and hardware have to be considered together.

In addition, with the images from the thesis, the results are as follows:

With a different set of images, the results differ.

By comparison, linerblend holds up best in quality; which blending method to adopt in practice has to be decided case by case.

3. Stitching and blending a sequence of images

The previous examples handle two images; we should at least extend this to three in order to learn how to handle the general case.

Processing a sequence is not simply adding one more image onto the already-stitched result; the key question is how to handle the mosaic that has already been assembled.

In principle, m2, i.e. H.at<double>(0,2), is the horizontal displacement. In practice, however, I could never read this value out correctly:

Mat outImage =H.clone();

uchar* outData=outImage.ptr<uchar>(0); // note: H is CV_64F, so reading its raw bytes through a uchar pointer does not give the translation

int itemp = outData[2]; // intended to fetch the horizontal offset

line(result_linerblend,Point(result_linerblend.cols-itemp,0),Point(result_linerblend.cols-itemp,img_2.rows),Scalar(255,255,255),2);

imshow("result_linerblend",result_linerblend);

So instead I wrote dedicated code for it:

// find the boundary of the already-stitched content

Mat matmask = result_linerblend.clone();

int idaterow0 = 0; int idaterowend = 0; // column of the first non-zero pixel in the top row and in the bottom row, scanning from the right edge

for(int j = matmask.cols-1; j >= 0; j--)

{

if (matmask.at<Vec3b>(0,j)[0] > 0)

{

idaterow0 = j;

break;

}

}

for(int j = matmask.cols-1; j >= 0; j--)

{

if (matmask.at<Vec3b>(matmask.rows-1,j)[0] > 0)

{

idaterowend = j;

break;

}

}

line(matmask,Point(min(idaterow0,idaterowend),0),Point(min(idaterow0,idaterowend),img_2.rows),Scalar(255,255,255),2);

imshow("result_linerblend",matmask);

This works well and is stable. The current implementation crops out the region to the left of the white line and uses it for the next stitch.

Based on this, I wrote a 3-image stitcher; the result is shown below. The image quality, especially the interpolation, may still need improvement, which is the next step.

// blend_series.cpp : multi-image stitching

//

#include "stdafx.h"

#include <iostream>

#include "opencv2/core/core.hpp"

#include "opencv2/imgproc/imgproc.hpp"

#include "opencv2/features2d/features2d.hpp"

#include "opencv2/highgui/highgui.hpp"

#include "opencv2/nonfree/features2d.hpp"

#include "opencv2/calib3d/calib3d.hpp"

using namespace std;

using namespace cv;

int main( int argc, char** argv )

{

Mat img_1 ;

Mat img_2 ;

Mat img_raw_1 = imread("Univ3.jpg");

Mat img_raw_2 = imread("Univ2.jpg");

cvtColor(img_raw_1,img_1,CV_BGR2GRAY);

cvtColor(img_raw_2,img_2,CV_BGR2GRAY);

//-- Step 1: detect keypoints with SURF

int minHessian = 400;

SurfFeatureDetector detector( minHessian );

std::vector<KeyPoint> keypoints_1, keypoints_2;

detector.detect( img_1, keypoints_1 );

detector.detect( img_2, keypoints_2 );

//-- Step 2: compute the SURF descriptors

SurfDescriptorExtractor extractor;

Mat descriptors_1, descriptors_2;

extractor.compute( img_1, keypoints_1, descriptors_1 );

extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: match the descriptors

FlannBasedMatcher matcher; // BFMatcher would do brute-force matching instead

std::vector< DMatch > matches;

matcher.match( descriptors_1, descriptors_2, matches );

// find the maximum and minimum descriptor distances

double max_dist = 0; double min_dist = 100;

for( int i = 0; i < descriptors_1.rows; i++ )

{

double dist = matches[i].distance;

if( dist < min_dist ) min_dist = dist;

if( dist > max_dist ) max_dist = dist;

}

std::vector< DMatch > good_matches;

for( int i = 0; i < descriptors_1.rows; i++ )

{

if( matches[i].distance <= 3*min_dist ) // the threshold here is 3 times min_dist

{

good_matches.push_back( matches[i]);

}

}

//-- Localize the object from img_1 in img_2

std::vector<Point2f> obj;

std::vector<Point2f> scene;

for( int i = 0; i < (int)good_matches.size(); i++ )

{

// the frame-to-mosaic approach is used here, so the left image is the scene and the right image is the obj

scene.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );

obj.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );

}

// call RANSAC directly to compute the homography

Mat H = findHomography( obj, scene, CV_RANSAC );

// image alignment

Mat result;

Mat resultback; // stores the new frame after the homography warp

warpPerspective(img_raw_2,result,H,Size(2*img_2.cols,img_2.rows));

result.copyTo(resultback);

Mat half(result,cv::Rect(0,0,img_2.cols,img_2.rows));

img_raw_1.copyTo(half);

//imshow("ajust",result);

// fade-in/fade-out (linear) blending

Mat result_linerblend = result.clone();

double dblend = 0.0;

int ioffset =img_2.cols-100;

for (int i = 0; i < 100; i++)

{

result_linerblend.col(ioffset+i) = result.col(ioffset+i)*(1-dblend) + resultback.col(ioffset+i)*dblend;

dblend = dblend +0.01;

}

// find the boundary of the already-stitched content

Mat matmask = result_linerblend.clone();

int idaterow0 = 0; int idaterowend = 0; // column of the first non-zero pixel in the top row and in the bottom row, scanning from the right edge

for(int j = matmask.cols-1; j >= 0; j--)

{

if (matmask.at<Vec3b>(0,j)[0] > 0)

{

idaterow0 = j;

break;

}

}

for(int j = matmask.cols-1; j >= 0; j--)

{

if (matmask.at<Vec3b>(matmask.rows-1,j)[0] > 0)

{

idaterowend = j;

break;

}

}

line(matmask,Point(min(idaterow0,idaterowend),0),Point(min(idaterow0,idaterowend),img_2.rows),Scalar(255,255,255),2);

imshow("result_linerblend",matmask);

/////////////////--------------- continue processing on the result image ---------------/////////////////

img_raw_1 = result_linerblend(Rect(0,0,min(idaterow0,idaterowend),img_2.rows));

img_raw_2 = imread("Univ1.jpg");

cvtColor(img_raw_1,img_1,CV_BGR2GRAY);

cvtColor(img_raw_2,img_2,CV_BGR2GRAY);

////-- Step 1: detect keypoints with SURF

//

SurfFeatureDetector detector2( minHessian );

keypoints_1.clear();

keypoints_2.clear();

detector2.detect( img_1, keypoints_1 );

detector2.detect( img_2, keypoints_2 );

//-- Step 2: compute the SURF descriptors

SurfDescriptorExtractor extractor2;

extractor2.compute( img_1, keypoints_1, descriptors_1 );

extractor2.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: match the descriptors

FlannBasedMatcher matcher2; // BFMatcher would do brute-force matching instead

matcher2.match( descriptors_1, descriptors_2, matches );

// find the maximum and minimum descriptor distances

max_dist = 0; min_dist = 100;

for( int i = 0; i < descriptors_1.rows; i++ )

{

double dist = matches[i].distance;

if( dist < min_dist ) min_dist = dist;

if( dist > max_dist ) max_dist = dist;

}

good_matches.clear();

for( int i = 0; i < descriptors_1.rows; i++ )

{

if( matches[i].distance <= 3*min_dist ) // the threshold here is 3 times min_dist

{

good_matches.push_back( matches[i]);

}

}

//-- Localize the object from img_1 in img_2

obj.clear();

scene.clear();

for( int i = 0; i < (int)good_matches.size(); i++ )

{

// the frame-to-mosaic approach is used here, so the left image is the scene and the right image is the obj

scene.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );

obj.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );

}

// call RANSAC directly to compute the homography

H = findHomography( obj, scene, CV_RANSAC );

// image alignment

warpPerspective(img_raw_2,result,H,Size(img_1.cols+img_2.cols,img_2.rows));

result.copyTo(resultback);

Mat half2(result,cv::Rect(0,0,img_1.cols,img_1.rows));

img_raw_1.copyTo(half2);

imshow("ajust",result);

// fade-in/fade-out (linear) blending

result_linerblend = result.clone();

dblend = 0.0;

ioffset =img_1.cols-100;

for (int i = 0; i < 100; i++)

{

result_linerblend.col(ioffset+i) = result.col(ioffset+i)*(1-dblend) + resultback.col(ioffset+i)*dblend;

dblend = dblend +0.01;

}

imshow("result_linerblend",result_linerblend);

waitKey(0);

return 0;

}

Copy and paste to get a 5-image stitcher. At this point it turns out that 3 images is often the practical limit (which may be why the OpenCV sample uses 3 images): as soon as the fourth image comes in, its homography becomes very poor.

Why does this happen? On reflection, the thesis uses planar coordinates, i.e. all the images lie roughly in one plane, which is particularly clear from the Logitech camera rig she describes later. In reality, the more common case is a person standing in one spot and shooting a 360-degree panorama; that calls for cylindrical coordinates, i.e. the images must first be given a cylindrical projection.

This yields the following result. Whether it is entirely correct is debatable, but it does give a basis for taking things further.

// column_transform.cpp : cylindrical projection

//

#include "stdafx.h"

#include <iostream>

#include "opencv2/core/core.hpp"

#include "opencv2/imgproc/imgproc.hpp"

#include "opencv2/features2d/features2d.hpp"

#include "opencv2/highgui/highgui.hpp"

#include "opencv2/nonfree/features2d.hpp"

#include "opencv2/calib3d/calib3d.hpp"

using namespace std;

using namespace cv;

#define PI 3.14159

int main( int argc, char** argv )

{

Mat img_1 = imread( "Univ1.jpg");

Mat img_result = img_1.clone();

for(int i = 0; i < img_result.rows; i++) { for(int j = 0; j < img_result.cols; j++) {

img_result.at<Vec3b>(i,j) = Vec3b(0,0,0); // clear the output image

}

}

int W = img_1.cols;

int H = img_1.rows;

float r = W/(2*tan(PI/6));

float k = 0;

float fx=0;

float fy=0;

for(int i = 0; i < H; i++) { for(int j = 0; j < W; j++) {

k = sqrt((float)(r*r+(W/2-j)*(W/2-j)));

fx = r*sin(PI/6)+r*sin(atan((j -W/2 )/r));

fy = H/2 +r*(i-H/2)/k;

int ix = (int)fx;

int iy = (int)fy;

if (ix >= 0 && ix < img_result.cols && iy >= 0 && iy < img_result.rows) // only write pixels that land inside the output image

{

img_result.at<Vec3b>(iy,ix)= img_1.at<Vec3b>(i,j);

}

}

}

imshow( "桶狀投影", img_1 );

imshow("img_result",img_result);

waitKey(0);

return 0;

}

The result is still not good. Clearly this step is more than a simple barrel/cylindrical warp; there must be quantitative parameters involved, or my transform is simply written wrong. That is for the next step.


SIFT gives you a pile of matched pairs

filter them

……

