
Dense optical flow Python code

Published: 2022-05-26 05:44:52

A. Optical flow C++ program

Looking at the program, the end result is opticalflow[i*width+j] = u; in the output this shows up as just two colours, black and white, decided by whether the absolute value of u is greater than 10.
Et is the gray-level difference between the current pixel and the corresponding pixel in the previous frame.
Ex is the difference between the current pixel and the next pixel in the same row.
Ey is the gray-level difference between the current pixel and the pixel at the same position in the next row.
u is then computed from these according to the formula.
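The post does not quote the program itself, so what follows is only a minimal sketch of the idea under stated assumptions: 8-bit grayscale frames prev and curr, and u taken as the normal-flow estimate -Et / sqrt(Ex^2 + Ey^2). The actual formula in the original program may differ.

#include <cmath>
#include <cstdint>
#include <vector>

// Hedged sketch: per-pixel flow from the three differences Et, Ex, Ey described above.
// The choice u = -Et / sqrt(Ex^2 + Ey^2) is an assumption; the original post only
// says "u is computed from the formula".
void simpleFlow(const uint8_t* prev, const uint8_t* curr,
                int width, int height, std::vector<float>& opticalflow)
{
    opticalflow.assign(width * height, 0.0f);
    for (int i = 0; i < height - 1; ++i) {
        for (int j = 0; j < width - 1; ++j) {
            int idx = i * width + j;
            float Et = (float)curr[idx] - (float)prev[idx];          // temporal difference
            float Ex = (float)curr[idx + 1] - (float)curr[idx];      // difference with next pixel in the row
            float Ey = (float)curr[idx + width] - (float)curr[idx];  // difference with pixel in the next row
            float grad = std::sqrt(Ex * Ex + Ey * Ey);
            float u = (grad > 1e-3f) ? -Et / grad : 0.0f;
            opticalflow[idx] = u;     // the output paints the pixel white when fabs(u) > 10
        }
    }
}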

B. OpenCV SVM: how do I convert features into the Mat type that the train function needs?

OpenCV's SVM implementation is based on the LibSVM package developed by Lin Chih-Jen's group at National Taiwan University, a simple, easy-to-use and efficient package for SVM classification and regression. The rough workflow for using the SVM in OpenCV is:

1) Set up the training samples. Two sets of data are needed: the class labels and the feature vectors.

2) Set the SVM parameters with the CvSVMParams class. The member svm_type selects the SVM type: CvSVM::C_SVC (C-SVC), CvSVM::NU_SVC (v-SVC), CvSVM::ONE_CLASS (one-class SVM), CvSVM::EPS_SVR (e-SVR), CvSVM::NU_SVR (v-SVR). The member kernel_type selects the kernel: CvSVM::LINEAR, linear u'v; CvSVM::POLY, polynomial (r*u'v + coef0)^degree; CvSVM::RBF, exp(-r|u-v|^2); CvSVM::SIGMOID, tanh(r*u'v + coef0). degree applies to the polynomial kernel; gamma to the polynomial/RBF/sigmoid kernels; coef0 to the polynomial/sigmoid kernels. Cvalue is the penalty parameter of the loss function, used by C-SVC, e-SVR and v-SVR; nu is the v parameter of v-SVC, one-class SVM and v-SVR; p is the epsilon of the e-SVR loss function; class_weights are the per-class weights for C-SVC; term_crit is the termination criterion of the SVM training. The defaults are degree = 0, gamma = 1, coef0 = 0, Cvalue = 1, nu = 0, p = 0, class_weights = 0.

3) Train the SVM by calling CvSVM::train to build the model: the first argument is the training data, the second the class labels, and the last the CvSVMParams.

4) Classify with the trained SVM by calling CvSVM::predict.

5) Get the support vectors. Besides classification, the support vectors can also be retrieved: CvSVM::get_support_vector_count returns their number, and CvSVM::get_support_vector returns the support vector with a given index.

The implementation looks like this:

// step 1:
float labels[4] = {1.0, -1.0, -1.0, -1.0};
Mat labelsMat(4, 1, CV_32FC1, labels);
float trainingData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
// step 2:
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
// step 3:
CvSVM SVM;
SVM.train(trainingDataMat, labelsMat, Mat(), Mat(), params);
// step 4:
Vec3b green(0, 255, 0), blue(255, 0, 0);
for (int i = 0; i < image.rows; i++) {
    for (int j = 0; j < image.cols; j++) {
        Mat sampleMat = (Mat_<float>(1, 2) << i, j);
        float response = SVM.predict(sampleMat);
        if (fabs(response - 1.0) < 0.0001) {
            image.at<Vec3b>(j, i) = green;
        } else if (fabs(response + 1.0) < 0.001) {
            image.at<Vec3b>(j, i) = blue;
        }
    }
}
// step 5:
int c = SVM.get_support_vector_count();
for (int i = 0; i < c; i++) {
    const float* v = SVM.get_support_vector(i);
}

For object detection, OpenCV trains a cascaded boosted classifier (Cascade Classification) on the Haar features of the samples; note that the new C++ interface can also use LBP features in addition to Haar features. The relevant structures: FeatureEvaluator is the base class for computing the cascade's feature values, with read, clone and getFeatureType operations, setImage and setWindow for assigning the image and the window, calcOrd for ordered features, calcCat for categorical features, and a create function for building the classifier's feature structure; CascadeClassifier is the cascade classifier class; groupRectangles groups the detected rectangles.

Next I try using the CascadeClassifier class to detect objects in a video stream (the Haar cascades cover faces, eyes, mouth, nose and body; here I try the fairly mature face and eyes-with-glasses detectors). The XML classifier file is loaded with the load function (both Haar and LBP classifiers are currently provided, though there are fewer LBP ones). The steps are as follows:

1) Load the cascade classifier with the CascadeClassifier member function load:

CascadeClassifier face_cascade;
face_cascade.load("haarcascade_frontalface_alt.xml");

2) Read the video stream. This part is basic: reading an image sequence from files, reading a video file, or reading from a camera; if you have read my earlier posts, all three methods should be familiar.

3) Apply the classifier to every frame. First convert the frame to grayscale and apply histogram equalization as preprocessing. Then detect faces by calling detectMultiScale, which detects objects at several scales of the input image. The parameter image is the input grayscale image; objects receives the vector of rectangles around the detected objects; scaleFactor is the scale step between image scales, default 1.1; minNeighbors is the number of neighbouring detections each candidate rectangle must have to be retained (I have not fully understood this parameter), default 3; flags is unused for new-style classifiers (but the current Haar classifiers are all old-style: CV_HAAR_DO_CANNY_PRUNING uses a Canny edge detector to skip regions with very few or very many edges, CV_HAAR_SCALE_IMAGE scales the image and detects normally, CV_HAAR_FIND_BIGGEST_OBJECT detects only the largest object, CV_HAAR_DO_ROUGH_SEARCH does only a rough search), default 0; minSize and maxSize bound the size of the returned object regions. The call used here is:

face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(30, 30));

4) Display the detections. This is also simple: call ellipse to draw each of the rectangles in faces. Going a step further, the eye positions can be found inside each detected face using the haarcascade_eye_tree_eyeglasses.xml classifier: select the face region as the region of interest (ROI) and repeat the steps above; I will not go into detail here. Of course, you can also try other XML files as classifiers; the LBP XML files are much smaller yet the results are still acceptable, though I have not tested them extensively. Talk is cheap, so here finally are the result image and the source download: http://download.csdn.net/detail/yang_xian521/3800468

The OpenCV Tutorials give no worked example for the Video module, so I am feeling my way along. I previously tried camShift for object detection; this time I try optical flow for motion estimation. The optical flow method used here is the widely used Lucas-Kanade method. I will not go into the theory and will mainly explain how to implement it with OpenCV. First, goodFeaturesToTrack finds strong corners in the image to use as feature points to track. Then calcOpticalFlowPyrLK is called with two consecutive images and a set of feature points chosen in the first image, and it outputs the positions of those points in the second image. The tracking results are then filtered to discard bad feature points, and the tracked paths are drawn. It sounds simple, doesn't it? The result and the code download: http://download.csdn.net/detail/yang_xian521/3811478

In captured video the background usually stays fixed, and the analysis generally cares about the moving foreground. To extract the foreground, a background model has to be built and compared against the current frame. Foreground extraction is used very widely, especially in intelligent surveillance. If a background image free of foreground objects is available, the job is relatively easy: just compare the current frame with the background image, which the function absdiff does. In most cases, however, such a background image cannot be obtained, for instance in complex scenes or under changing lighting, so the background has to be built dynamically. One simple approach is to average the observed frames, but it has drawbacks: it needs a large number of frames before a background is available, and no foreground objects may enter the scene while the average is being taken. A better approach is to build the background dynamically and keep it updated in real time. The implementation has two parts. One part calls absdiff to find the differences between the current frame and the background image, and then uses threshold so that a pixel is declared foreground only when it differs from the corresponding background pixel by more than a threshold. The other part updates the background image by calling accumulateWeighted, whose weight parameter controls how fast the background adapts; the trick is to use the foreground mask just obtained as the mask argument, so that the foreground does not contaminate the background update. The result is shown in the figure; the code download is http://download.csdn.net/detail/yang_xian521/3814878

Although the threshold and the update weight can be tuned, the test video shows that moving leaves still disturb the result quite a bit, and a foreground object present in the very first frame is especially troublesome: since later background updates are masked by the foreground, the first frame's foreground is very hard to wash out of the background model. An improved approach is the mixture-of-Gaussians model. It lets each pixel carry more information, which effectively suppresses disturbances such as fluttering leaves or rippling water. This algorithm is much more elaborate than the simple methods described above and not easy to implement, but fortunately OpenCV has already done the work and wrapped it in the BackgroundSubtractorMOG class, which is very convenient to use:

Mat frame;
Mat foreground;  // foreground image
namedWindow("Extracted Foreground");
// mixture-of-Gaussians object
BackgroundSubtractorMOG mog;
bool stop(false);
while (!stop) {
    if (!capture.read(frame)) {
        break;
    }
    // update the background model and output the foreground
    mog(frame, foreground, 0.01);
    // the output foreground is not a binary image, so threshold it for display
    threshold(foreground, foreground, 128, 255, THRESH_BINARY_INV);
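The running-average background model described in the paragraph above is only available through the download link; what follows is a minimal sketch of the idea, assuming the OpenCV 2.x C++ API, a camera as input, and illustrative values for the threshold and the learning rate (they are not the values used in the original code):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture capture(0);                 // assumed input: the default camera
    Mat frame, gray, background, backImage, foreground, updateMask;
    double learningRate = 0.01;              // background update speed (illustrative)
    int threshValue = 25;                    // per-pixel difference threshold (illustrative)

    while (capture.read(frame)) {
        cvtColor(frame, gray, CV_BGR2GRAY);
        if (background.empty()) {
            gray.convertTo(background, CV_32F);   // initialise the model with the first frame
        }
        background.convertTo(backImage, CV_8U);
        absdiff(backImage, gray, foreground);     // difference between current frame and background
        threshold(foreground, foreground, threshValue, 255, THRESH_BINARY);
        // update the background only where no foreground was detected
        bitwise_not(foreground, updateMask);
        accumulateWeighted(gray, background, learningRate, updateMask);
        imshow("Foreground", foreground);
        if (waitKey(10) >= 0) break;
    }
    return 0;
}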

C. MATLAB program for an optical flow algorithm

Try creating a new script file and running that function on its own rather than putting it straight into the main function; a subfunction needs to be called.

D. Help! A problem plotting optical flow with quiver.

If the computation is too heavy, shrink the image as needed; that speeds the calculation up considerably.
quiver draws a vector field; if your screen resolution is too low to see it clearly, zoom into the figure and it will become legible.
Another way to check is to display the optical flow for only part of the image: you will then see the arrows clearly, and using the coordinates MATLAB displays you can locate the region the flow belongs to.

E. How is the partial derivative with respect to time computed in optical flow?

This mainly applies to footage whose speed has been changed. Take four frames a, b, c, d: if the speed is changed to 50%, the software has to work out how to fill the extra frames it now needs.
With frame sampling the result is a a b b c c d d: existing frames are simply shown twice. This renders fastest.
With frame blending the result is a, (a+b)/2 (i.e. the two frames blended at 50% opacity each), b, (b+c)/2, c, (c+d)/2. It affects rendering speed, but only slightly.
With optical flow, the algorithm computes the displacement of every pixel between frames and, from those displacements, synthesizes an entirely new frame to insert where frames are missing (a rough code sketch follows below). For continuously shot footage with little change between frames the result is good: after the speed change there is no perceptible stutter. But where the picture changes drastically, or across an edit cut, the result can be dreadful. Enabling it also slows rendering somewhat.
So choose according to the situation.
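For reference, here is a rough sketch of the optical-flow interpolation idea described above, written with OpenCV rather than taken from any editing software: a dense flow field is estimated between two frames with Farneback's method, and one frame is warped by half the flow to synthesize an intermediate frame. Sampling the flow at the destination pixel, as done below, is an approximation.

#include <opencv2/opencv.hpp>
using namespace cv;

// a and b are consecutive 8-bit grayscale frames; returns an approximate middle frame
Mat interpolateMiddleFrame(const Mat& a, const Mat& b)
{
    Mat flow;
    calcOpticalFlowFarneback(a, b, flow, 0.5, 3, 15, 3, 5, 1.2, 0);

    // build a remap table that pulls each output pixel from frame a, shifted by half the flow
    Mat mapX(a.size(), CV_32F), mapY(a.size(), CV_32F);
    for (int y = 0; y < a.rows; ++y) {
        for (int x = 0; x < a.cols; ++x) {
            const Point2f& f = flow.at<Point2f>(y, x);
            mapX.at<float>(y, x) = x - 0.5f * f.x;
            mapY.at<float>(y, x) = y - 0.5f * f.y;
        }
    }
    Mat mid;
    remap(a, mid, mapX, mapY, INTER_LINEAR);
    return mid;
}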

F. Can optical flow be written in C#?

Below is an optical flow algorithm I implemented myself: it computes the translation between two adjacent images by searching for and matching patterns.

Pattern matching: with either a square pattern or an X-shaped pattern, the pattern's pixel gray values are differenced between the two images and the differences summed; the position with the smallest total difference is taken as the best match.

Multi-pattern matching: several positions are chosen in the image, the best-matching location is found for each, and the matching results of all the positions are finally averaged.

In testing it performs well on irregular, rough surfaces such as grass, asphalt and carpet.

// optical flow using a Multi-Pattern-Match algorithm: use the pattern with the
// largest inner difference to do multi-pattern matching, then average the results.
// already implemented: square pattern, X pattern
class COpticalFlow_MPM
{public:
COpticalFlow_MPM(){} virtual ~COpticalFlow_MPM(){} static bool AddImplementation(COpticalFlow_MPM* imp)
{ if(m_impNum < c_maxImpNum){
m_impTbl[m_impNum++] = imp; return true;
} return false;
} static void SetImageDimesion(int width, int height, int lineBytes)
{ for(int i = 0; i < m_impNum; ++i){
m_impTbl[i]->m_width = width;
m_impTbl[i]->m_height = height;
m_impTbl[i]->m_lineBytes = lineBytes;
m_impTbl[i]->GenerateSearchTable();
m_impTbl[i]->GeneratePatternTable();
}
} // auto choose the pattern to do optical flow
static void AutoOpticalFlow(uint8_t* image1, uint8_t* image2)
{
m_impTbl[m_impCurr]->calcOpticalFlow(image1, image2); // check if need switch pattern
static int s_goodCount = 0; static int s_badCount = 0; if(m_quality > 0){
s_goodCount++;
}else{
s_badCount++;
} if(s_goodCount + s_badCount > 30){ if(s_badCount * 2 > s_goodCount){
m_impCurr = m_impCurr < (m_impNum - 1) ? m_impCurr + 1 : 0;
}
s_goodCount = s_badCount = 0;
}
} // the result
static uint8_t m_quality; // 0 ~ 255, 0 means the optical flow is invalid.
static float m_offset_x; // unit is pixel
static float m_offset_y;protected: virtual const char* Name() = 0; virtual void GeneratePatternTable() = 0; // prepare the address offset tables, that can make the calculation simple and fast.
void GenerateSearchTable()
{ // generate the search offset from corresponding location to the max distance
int index = 0; int yNum, ay[2]; for (int dist = 1; dist <= c_searchD; ++dist){ for (int x = -dist; x <= dist; ++x){ // for each x, only have 1 or 2 dy choices.
ay[0] = dist - abs(x); if (ay[0] == 0){
yNum = 1;
} else{
yNum = 2;
ay[1] = -ay[0];
} for (int iy = 0; iy < yNum; ++iy){
m_searchOffsets[index++] = ay[iy] * m_lineBytes + x;
}
}
} // generate the watch points.
index = 0; int center = m_width * m_height / 2 + m_width / 2; for (int y = -c_watchN; y <= c_watchN; ++y){ for (int x = -c_watchN; x <= c_watchN; ++x){
m_watchPoints[index++] = center + y * c_watchG * m_lineBytes + x * c_watchG * m_width / m_height;
}
}
} void ResetResult()
{
m_quality = 0;
m_offset_x = 0;
m_offset_y = 0;
} void calcOpticalFlow(uint8_t* image1, uint8_t* image2)
{
ResetResult(); int betterStart; int matchedOffset; int x1, y1, x2, y2; int matchedCount = 0; int offset_x[c_watchS]; int offset_y[c_watchS]; for (int i = 0; i < c_watchS; ++i){ if (SearchMaxInnerDiff(image1, m_watchPoints[i], betterStart)){
int32_t minDiff = SearchBestMatch(image1 + betterStart, m_patternOffsets, c_patternS, image2, betterStart, matchedOffset); if (minDiff < c_patternS * c_rejectDiff){
x1 = betterStart % m_lineBytes; y1 = betterStart / m_lineBytes;
x2 = matchedOffset % m_lineBytes; y2 = matchedOffset / m_lineBytes;
m_offset_x += (x2 - x1);
m_offset_y += (y2 - y1);
offset_x[matchedCount] = (x2 - x1);
offset_y[matchedCount] = (y2 - y1);
matchedCount++;
}
}
} if (matchedCount >= 4){
m_offset_x /= matchedCount;
m_offset_y /= matchedCount; // calculate the variance, and use the variance to get the quality.
float varX = 0, varY = 0; for (int i = 0; i < matchedCount; ++i){
varX += (offset_x[i] - m_offset_x) * (offset_x[i] - m_offset_x);
varY += (offset_y[i] - m_offset_y) * (offset_y[i] - m_offset_y);
}
varX /= (matchedCount - 1);
varY /= (matchedCount - 1); float varMax = varX > varY ? varX : varY;
m_quality = (uint8_t)(varMax > 2 ? 0 : (2-varMax) * 255 / 2); if(m_quality == 0){
ResetResult();
}
}
} // get the pattern inner diff, the pattern is center of the area.
inline int32_t InnerDiff(const uint8_t* center, const int* patternPoints, const int patternSize)
{
int32_t sum = 0;
int32_t mean = 0; for (int i = 0; i < patternSize; ++i){
sum += center[patternPoints[i]];
}
mean = sum / patternSize;

int32_t sumDiff = 0; for (int i = 0; i < patternSize; ++i){
sumDiff += abs(center[patternPoints[i]] - mean);
} return sumDiff;
} // get the sum diff between two pattern, the pattern is the center of the area.
inline int32_t PatternDiff(const uint8_t* center1, const uint8_t* center2, const int* patternPoints, const int patternSize)
{
int32_t sumDiff = 0; for (int i = 0; i < patternSize; ++i){
sumDiff += abs(center1[patternPoints[i]] - center2[patternPoints[i]]);
} return sumDiff;
} // search the max inner diff location, image is the full image begining, the return value searchOffset is base on the image begining.
inline bool SearchMaxInnerDiff(const uint8_t* image, int searchStart, int& betterStart)
{ // if the inner diff is less than this number, cannot use this pattern to do search.
const int c_minInnerDiff = c_patternS * 4; const int c_acceptInnerDiff = c_patternS * 12; const uint8_t* searchCenter = image + searchStart;
int32_t currDiff = InnerDiff(searchCenter, m_patternOffsets, c_patternS);
int32_t maxDiff = currDiff;
betterStart = 0; for (int i = 0; i < c_searchS; ++i){
currDiff = InnerDiff(searchCenter + m_searchOffsets[i], m_patternOffsets, c_patternS); if (currDiff > maxDiff){
maxDiff = currDiff;
betterStart = m_searchOffsets[i];
} if (maxDiff > c_acceptInnerDiff){ break;
}
} if (maxDiff < c_minInnerDiff){ return false;
}

betterStart += searchStart; return true;
} // get the minimum pattern diff with the 8 neighbors.
inline int32_t MinNeighborDiff(const uint8_t* pattern)
{ const int32_t threshDiff = c_patternS * c_acceptDiff; // eight neighbors of a pattern
const int neighborOffsets[8] = { -1, 1, -m_lineBytes, m_lineBytes, -m_lineBytes - 1, -m_lineBytes + 1, m_lineBytes - 1, m_lineBytes + 1 }; int minDiff = PatternDiff(pattern, pattern + neighborOffsets[0], m_patternOffsets, c_patternS); if (minDiff < threshDiff){ return minDiff;
} int diff; for (int i = 1; i < 8; ++i){
diff = PatternDiff(pattern, pattern + neighborOffsets[i], m_patternOffsets, c_patternS); if (diff < minDiff){
minDiff = diff; if (minDiff < threshDiff){ return minDiff;
}
}
} return minDiff;
} // search the pattern that have max min_diff with neighbors, image is the full image begining, the return value betterStart is base on the image begining.
inline bool SearchMaxNeighborDiff(const uint8_t* image, int searchStart, int& betterStart)
{ const uint8_t* searchCenter = image + searchStart;
int32_t currDiff = MinNeighborDiff(searchCenter);
int32_t maxDiff = currDiff;
betterStart = 0; for (int i = 0; i < c_searchS; ++i){
currDiff = MinNeighborDiff(searchCenter + m_searchOffsets[i]); if (currDiff > maxDiff){
maxDiff = currDiff;
betterStart = m_searchOffsets[i];
}
} if (maxDiff <= c_patternS * c_acceptDiff){ return false;
}

betterStart += searchStart; return true;
} // match the target pattern in the image, return the best match quality and matched offset; the pattern is the center, image is the full image begining.
inline int32_t SearchBestMatch(const uint8_t* target, const int* patternPoints, const int patternSize, const uint8_t* image, int searchStart, int& matchedOffset)
{ const int thinkMatchedDiff = patternSize * c_acceptDiff; const uint8_t* searchCenter = image + searchStart; const uint8_t* matched = searchCenter;
int32_t currDiff = PatternDiff(target, matched, patternPoints, patternSize);
int32_t minDiff = currDiff; for (int i = 0; i < c_searchS; ++i){
currDiff = PatternDiff(target, searchCenter + m_searchOffsets[i], patternPoints, patternSize); if (currDiff < minDiff){
minDiff = currDiff;
matched = searchCenter + m_searchOffsets[i];
} if (minDiff < thinkMatchedDiff){ break;
}
}

matchedOffset = matched - image; return minDiff;
} int m_width, m_height, m_lineBytes; static const int c_acceptDiff = 2; // if the average pixel error is less than this number, think already matched
static const int c_rejectDiff = 8; // if the average pixel error is larger than this number, think it's not matched // all address offset to the pattern key location, the size is according to the square pattern.
static const int c_patternN = 3; static const int c_patternS = (2 * c_patternN + 1) * (2 * c_patternN + 1); int m_patternOffsets[c_patternS]; // the offsets to the image start for each seed point, the match is around these seed points.
static const int c_watchN = 2; static const int c_watchS = (2 * c_watchN + 1) * (2 * c_watchN + 1); static const int c_watchG = 30; // The gap of the watch grid in height direction
int m_watchPoints[c_watchS]; // the search offset to the search center, match the pattern from the corresponding location to the max distance. (not include distance 0.)
static const int c_searchD = 10; // search street-distance from the key location
static const int c_searchS = 2 * c_searchD * c_searchD + 2 * c_searchD; int m_searchOffsets[c_searchS]; // The implements table that use various pattern
static int m_impCurr; static int m_impNum; static const int c_maxImpNum = 16; static COpticalFlow_MPM* m_impTbl[c_maxImpNum];
};
// save the optical flow result
uint8_t COpticalFlow_MPM::m_quality; // 0 ~ 255, 0 means the optical flow is invalid.
float COpticalFlow_MPM::m_offset_x;  // unit is pixel
float COpticalFlow_MPM::m_offset_y;
// the implementations that use different patterns
int COpticalFlow_MPM::m_impCurr = 0;
int COpticalFlow_MPM::m_impNum = 0;
COpticalFlow_MPM* COpticalFlow_MPM::m_impTbl[COpticalFlow_MPM::c_maxImpNum];

// Multi-Pattern-Match-Square
class COpticalFlow_MPMS : public COpticalFlow_MPM
{public:
COpticalFlow_MPMS(){} virtual ~COpticalFlow_MPMS(){} virtual const char* Name() { return "Square"; }protected: // prepare the address offset tables, that can make the calculation simple and fast.
virtual void GeneratePatternTable()
{ // generate the address offset of the match area to the center of the area.
int index = 0; for (int y = -c_patternN; y <= c_patternN; ++y){ for (int x = -c_patternN; x <= c_patternN; ++x){
m_patternOffsets[index++] = y * m_lineBytes + x;
}
}
}
};
// Multi-Pattern-Match-X
class COpticalFlow_MPMX : public COpticalFlow_MPM
{public:
COpticalFlow_MPMX(){} virtual ~COpticalFlow_MPMX(){} virtual const char* Name() { return "X"; }protected: // prepare the address offset tables, that can make the calculation simple and fast.
virtual void GeneratePatternTable()
{ // generate the address offset of the match area to the center of the area.
int index = 0; int armLen = (c_patternS - 1) / 4; for (int y = -armLen; y <= armLen; ++y){ if(y == 0){
m_patternOffsets[index++] = 0;
}else{
m_patternOffsets[index++] = y * m_lineBytes - y;
m_patternOffsets[index++] = y * m_lineBytes + y;
}
}
}
};

static COpticalFlow_MPMS of_mpms;
static COpticalFlow_MPMX of_mpmx;

void OpticalFlow::init()
{ // set the optical flow implementation table
COpticalFlow_MPM::AddImplementation(&of_mpms);
COpticalFlow_MPM::AddImplementation(&of_mpmx);
COpticalFlow_MPM::SetImageDimesion(m_width, m_height, m_lineBytes);
}

uint32_t OpticalFlow::flow_image_in(const uint8_t *buf, int len, uint8_t *quality, int32_t *centi_pixel_x, int32_t *centi_pixel_y)
{ static uint8_t s_imageBuff1[m_pixelNum]; static uint8_t s_imageBuff2[m_pixelNum]; static uint8_t* s_imagePre = NULL; static uint8_t* s_imageCurr = s_imageBuff1; *quality = 0; *centi_pixel_x = 0; *centi_pixel_y = 0;

memcpy(s_imageCurr, buf, len); // first image
if(s_imagePre == NULL){
s_imagePre = s_imageCurr;
s_imageCurr = s_imageCurr == s_imageBuff1 ? s_imageBuff2 : s_imageBuff1; // switch image buffer
return 0;
}

COpticalFlow_MPM::AutoOpticalFlow(s_imagePre, s_imageCurr); if(COpticalFlow_MPM::m_quality > 0){ *quality = COpticalFlow_MPM::m_quality; *centi_pixel_x = (int32_t)(COpticalFlow_MPM::m_offset_x * 100); *centi_pixel_y = (int32_t)(COpticalFlow_MPM::m_offset_y * 100);
}

s_imagePre = s_imageCurr;
s_imageCurr = s_imageCurr == s_imageBuff1 ? s_imageBuff2 : s_imageBuff1; // switch image buffer
return 0;
}
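A hedged usage sketch of the class above; the OpticalFlow class declaration itself is not shown in the post, so the frame source and the width/height variables used here are assumptions made purely for illustration:

// hypothetical frame source; replace with whatever supplies width*height 8-bit gray pixels
extern const uint8_t* camera_grab_gray();
extern const int width, height;

void run_optical_flow_loop()
{
    OpticalFlow flow;
    flow.init();                          // registers the square and X pattern implementations
    uint8_t quality;
    int32_t dx_centi, dy_centi;
    for (;;) {
        const uint8_t* img = camera_grab_gray();
        flow.flow_image_in(img, width * height, &quality, &dx_centi, &dy_centi);
        if (quality > 0) {
            // dx_centi / 100.0 and dy_centi / 100.0 are the estimated shift in pixels
        }
    }
}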

G. Looking for a MATLAB program for motion detection based on optical flow, ideally with figures; using the classic HS (Horn-Schunck) method would be best. Many thanks.

H. Asking for help implementing background subtraction, frame differencing and optical flow; C++ or MATLAB is fine, and ready-made source code is fine too.

I suggest using the OpenCV library.
Environment setup:
http://jingyan..com/album/2a138328497ce6074b134f64.html
Matrix operations:
http://blog.sina.com.cn/s/blog_afe2af380101bqhz.html

Code:
absdiff(frame, prveframe, differframe); // get the difference frame: differframe = frame - prveframe
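Expanding the single line above into a minimal frame-differencing sketch (my own illustration, assuming a camera input and an illustrative threshold of 25):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture capture(0);
    Mat frame, gray, prveframe, differframe, motionMask;
    while (capture.read(frame)) {
        cvtColor(frame, gray, CV_BGR2GRAY);
        if (!prveframe.empty()) {
            absdiff(gray, prveframe, differframe);   // differframe = |gray - prveframe|
            threshold(differframe, motionMask, 25, 255, THRESH_BINARY);
            imshow("motion", motionMask);
        }
        prveframe = gray.clone();
        if (waitKey(10) >= 0) break;
    }
    return 0;
}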

I. Design of visual tracking of a moving car based on feature optical flow (non-professionals please refrain)

Visual tracking essentially means using image processing techniques to pick out the moving object in an image sequence.
Features are essential; without features there is no basis for tracking. But there are many features to choose from, and their principles all differ and are largely unrelated. The optical flow you mention exploits statistics over time; as for your so-called feature optical flow, it depends on what you are building and what your requirements are, and nobody else knows what you intend to do or which features you mean.
There are far too many detection methods for video image sequences; this cannot be answered as asked.
Question four is covered in most image processing textbooks; I suggest you read them carefully.
Question five I cannot answer either, for the reason given above.

Judging by your questions, this is probably a master's thesis (unless it is just the ordinary background-generation part, which could be given to an undergraduate). First, you need to study the textbooks properly: these are things you should learn yourself rather than ask about, and that is the most basic research skill. Second, your questions are far too broad and come across as unprofessional; even a professor would not know where to start answering them.
I recommend a digital image processing textbook; there are many available, and Gonzalez's is probably the easiest to start with. It explains things clearly and will help you grasp the basics.
You should also read plenty of literature on the specific methods (and don't say you can't search for papers); these will be a series of mathematical problems that nobody can solve for you, not even your advisor (unless you are an undergraduate, in which case advisors mostly hand over something that already works). You will have to describe them in detail in your thesis.
