
Dense optical flow Python code

Published: 2022-05-26 05:44:52

A. Optical flow program in C++

Looking at the end of the program, it produces opticalflow[i*width+j] = u; in the output this shows up as just two colors, black and white, and the criterion is whether the absolute value of u is greater than 10.
Et is the grayscale difference between the current pixel and the corresponding pixel in the previous image.
Ex is the difference between the current pixel and the next pixel in the same row.
Ey is the grayscale difference between the current pixel and the pixel at the same position in the next row.
u is then computed from these according to the formula.
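The question does not reproduce the formula for u itself. Below is a minimal NumPy sketch of the per-pixel quantities described above, using the normal-flow x-component u = -Ex*Et / (Ex^2 + Ey^2) as an assumed stand-in for that formula, together with the same |u| > 10 test for the black/white output:

import numpy as np

def simple_flow_map(prev_gray, curr_gray, thresh=10.0, eps=1e-6):
    prev = prev_gray.astype(np.float32)
    curr = curr_gray.astype(np.float32)
    Et = curr - prev  # grayscale difference with the corresponding pixel of the previous image
    Ex = np.zeros_like(curr)
    Ex[:, :-1] = curr[:, 1:] - curr[:, :-1]  # difference with the next pixel in the same row
    Ey = np.zeros_like(curr)
    Ey[:-1, :] = curr[1:, :] - curr[:-1, :]  # difference with the pixel in the next row, same column
    # assumed formula (normal-flow x-component); the original post does not state its exact formula
    u = -Ex * Et / (Ex ** 2 + Ey ** 2 + eps)
    return np.where(np.abs(u) > thresh, 255, 0).astype(np.uint8)  # white where |u| > 10, black elsewhere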

B. OpenCV SVM: how do I convert features into the Mat type required by the SVM train function?

OpenCV's SVM implementation is built on the LibSVM package, a simple, easy-to-use and efficient SVM package for pattern recognition and regression developed by Lin Chih-Jen and colleagues at National Taiwan University. The rough workflow for using SVM with OpenCV is:

1) Set up the training sample set. Two groups of data are needed: the class labels and the feature vectors.

2) Set the SVM parameters with the CvSVMParams class. The member variable svm_type selects the SVM type: CvSVM::C_SVC (C-SVC), CvSVM::NU_SVC (v-SVC), CvSVM::ONE_CLASS (one-class SVM), CvSVM::EPS_SVR (e-SVR), CvSVM::NU_SVR (v-SVR). The member variable kernel_type selects the kernel: CvSVM::LINEAR linear u'v; CvSVM::POLY polynomial (r*u'v + coef0)^degree; CvSVM::RBF exp(-r|u-v|^2); CvSVM::SIGMOID tanh(r*u'v + coef0). degree applies to the polynomial kernel; gamma to the polynomial/RBF/sigmoid kernels; coef0 to the polynomial/sigmoid kernels; Cvalue is the loss-function penalty, effective for C-SVC, e-SVR and v-SVR; nu sets the parameter of v-SVC, one-class SVM and v-SVR; p sets the loss-function epsilon of e-SVR; class_weights are the class weights for C_SVC; term_crit is the termination criterion of SVM training. Defaults are degree = 0, gamma = 1, coef0 = 0, Cvalue = 1, nu = 0, p = 0, class_weights = 0.

3) Train the SVM: call CvSVM::train to build the model; the first argument is the training data, the second the class labels, and the last the CvSVMParams.

4) Classify with the SVM: call CvSVM::predict.

5) Get the support vectors. Besides classification you can also obtain the support vectors: CvSVM::get_support_vector_count returns how many there are, and CvSVM::get_support_vector returns the support vector at a given index.

The implementation looks like this:

// step 1:
float labels[4] = {1.0, -1.0, -1.0, -1.0};
Mat labelsMat(4, 1, CV_32FC1, labels);
float trainingData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
// step 2:
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
// step 3:
CvSVM SVM;
SVM.train(trainingDataMat, labelsMat, Mat(), Mat(), params);
// step 4: classify every pixel position of an image and color it by the predicted class
Vec3b green(0, 255, 0), blue(255, 0, 0);
for (int i = 0; i < image.rows; i++) {
for (int j = 0; j < image.cols; j++) {
Mat sampleMat = (Mat_<float>(1, 2) << j, i);
float response = SVM.predict(sampleMat);
if (fabs(response - 1.0) < 0.0001) {
image.at<Vec3b>(i, j) = green;
} else if (fabs(response + 1.0) < 0.001) {
image.at<Vec3b>(i, j) = blue;
}
}
}
// step 5:
int c = SVM.get_support_vector_count();
for (int i = 0; i < c; i++) {
const float* v = SVM.get_support_vector(i);
}

The object-detection method OpenCV supports trains a classifier on Haar features of the samples, producing a cascaded boosted classifier (Cascade Classification). Note that the newer C++ interface can also use LBP features in addition to Haar features. First, the relevant classes: FeatureEvaluator is the base class for computing feature values in a cascade classifier; its functionality includes read, clone, getFeatureType, setImage/setWindow for assigning the image and the detection window, calcOrd for ordered features, calcCat for categorical features, and a create function for building the classifier's feature structure. The cascade classifier class itself is CascadeClassifier, and groupRectangles groups the detected target rectangles. Next I try to use CascadeClassifier to detect targets in a video stream (the Haar cascades cover faces, eyes, mouth, nose and body; here I try the fairly mature face and eye detectors). The XML classifier file is loaded with the load function (the classifiers currently shipped include Haar and LBP classifiers; the LBP data is more limited). The steps are:

1) Load the cascade classifier by calling the CascadeClassifier member function load:

CascadeClassifier face_cascade;
face_cascade.load("haarcascade_frontalface_alt.xml");

2) Read the video stream. This part is basic: reading an image sequence from files, reading a video file, or reading a camera stream; if you have read my earlier posts, all three should be familiar.

3) Apply the classifier to each frame. First convert the frame to grayscale and apply histogram equalization as preprocessing. Then detect faces with detectMultiScale, which searches for objects across the scales of the input image: image is the input grayscale image; objects is the output vector of rectangles around detected objects; scaleFactor is the scale step between image scales, default 1.1; minNeighbors is how many neighbouring detections each candidate rectangle must have to be kept (I never fully understood this parameter), default 3; flags is unused for new-style classifiers (but the current Haar classifiers are all old-style: CV_HAAR_DO_CANNY_PRUNING uses a Canny edge detector to skip regions with very few or very many edges, CV_HAAR_SCALE_IMAGE is normal scaled detection, CV_HAAR_FIND_BIGGEST_OBJECT detects only the largest object, CV_HAAR_DO_ROUGH_SEARCH does only a rough search), default 0; minSize and maxSize bound the size of the returned target regions. The call used here is:

face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(30, 30));

4) Display the targets. Also simple: call ellipse to draw each of the faces rectangles just obtained. Going further, you can also locate the eyes inside every detected face using the classifier file haarcascade_eye_tree_eyeglasses.xml: select the face region as the region of interest (ROI) and repeat the steps above; I won't go into detail here. Of course you can also try the other XML files as classifiers; the LBP feature files are much smaller yet the results are still decent, though I have not tested them much. Talk is cheap, so finally here are the result screenshots and the source download: http://download.csdn.net/detail/yang_xian521/3800468
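For readers coming from Python (the language in this article's title), a rough sketch of the same face-detection loop using the cv2 bindings might look like the following; the cascade file name mirrors the C++ snippet above, while the camera index, window name and exit key are illustrative assumptions:

import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
cap = cv2.VideoCapture(0)  # camera stream, as in step 2 (index 0 is an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))  # step 3 preprocessing
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=2, minSize=(30, 30))
    for (x, y, w, h) in faces:  # step 4: draw an ellipse around each detected face
        cv2.ellipse(frame, (x + w // 2, y + h // 2), (w // 2, h // 2), 0, 0, 360, (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(10) & 0xFF == 27:  # Esc to quit
        break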
The companion OpenCV Tutorials have no worked example for the Video module, so I had to feel my way forward. I previously tried camShift for object tracking; this time I try optical flow for motion estimation. The method used here is the widely used Lucas-Kanade method. I won't dwell on the theory of optical flow and will focus on the OpenCV implementation. First use goodFeaturesToTrack to obtain strong corners in the image as the feature points to track; then call calcOpticalFlowPyrLK, whose inputs are two consecutive images and a set of feature points chosen in the first image, and whose output is the positions of those points in the next image. Then filter the tracking results to drop the bad feature points, and finally draw the tracked paths of the feature points. Sounds simple enough. Demo and code download: http://download.csdn.net/detail/yang_xian521/3811478

In captured video the background usually stays fixed, and the analysis normally focuses on the moving foreground objects. To extract the foreground objects you need to build a model of the background and compare it with the current frame. Foreground extraction is used very widely, especially in intelligent surveillance. If you have a background image that contains no foreground objects, extracting the foreground is relatively easy: just compare the current frame with the background image using absdiff. In most cases, however, obtaining such a background image is impossible, for example in complex scenes or under changing lighting, so the background has to be built dynamically. A simple approach is to average the observed frames, but it has drawbacks: it needs a large number of frames before a background can be computed, and no foreground object may enter the scene while the average is being taken. A better approach is therefore to build the background image dynamically and update it in real time. The implementation has two parts. One part calls absdiff to find the difference between the current frame and the background, then uses threshold to keep only the foreground: a pixel is treated as foreground only when it differs from the background by more than a given threshold. The other part updates the background by calling accumulateWeighted, whose weight parameter controls how fast the background is updated with the current frame; the trick here is to use the extracted foreground result as a mask, so that the foreground does not contaminate the background while it is being updated. The demo is shown in the figure; code download: http://download.csdn.net/detail/yang_xian521/3814878

Although the threshold and the update weight can be tuned, the test video shows that moving leaves still disturb the result quite a lot, and when foreground is already present in the first frame, its contribution to the background is very hard to wash out, because every later background update is masked by the foreground. An improvement is the mixture-of-Gaussians model: each pixel carries more information, which effectively suppresses disturbances such as constantly swaying leaves or rippling water. This refined algorithm is far more complex than the simple method above and not easy to implement, but fortunately OpenCV has already done the work and wrapped it in the class BackgroundSubtractorMOG, which is very convenient. The implementation is:

Mat frame;
Mat foreground; // foreground image
namedWindow("Extracted Foreground");
BackgroundSubtractorMOG mog; // mixture-of-Gaussians object
bool stop(false);
while (!stop) {
if (!capture.read(frame)) {
break;
}
// update the background model and output the foreground
mog(frame, foreground, 0.01);
// the output foreground is not a binary image, so threshold it before display
threshold(foreground, foreground, 128, 255, THRESH_BINARY_INV);
imshow("Extracted Foreground", foreground);
if (waitKey(10) >= 0) {
stop = true;
}
}
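Modern OpenCV (3.x and later) exposes a closely related mixture-of-Gaussians background subtractor to Python as createBackgroundSubtractorMOG2 (the plain BackgroundSubtractorMOG shown above belongs to the legacy interface). A minimal sketch, with the input file name as a placeholder and mirroring the 0.01 learning rate and the thresholding above:

import cv2

cap = cv2.VideoCapture("input.avi")  # placeholder input video
mog = cv2.createBackgroundSubtractorMOG2()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog.apply(frame, learningRate=0.01)  # update the background model and get the foreground mask
    _, fg = cv2.threshold(fg, 128, 255, cv2.THRESH_BINARY_INV)  # same post-processing as the C++ snippet
    cv2.imshow("Extracted Foreground", fg)
    if cv2.waitKey(10) & 0xFF == 27:  # Esc to quit
        break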

C. A MATLAB program for the optical flow algorithm

Try creating a new script file and running this function on its own instead of putting it straight into the main function; a subfunction has to be called.

D. Help! A problem with plotting optical flow using quiver.

If the computation is too heavy, you can shrink the image as needed and the calculation will be faster.
quiver draws a vector field; if it looks unclear at your screen resolution, zoom the figure in and you will see it clearly.
Another way to check it is to display the flow for only a part of the image: you will then see obvious arrows, and from the coordinates Matlab displays you can locate the region the flow belongs to.
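For a Python take on the same idea (the article's title asks for dense flow in Python), one can compute dense Farneback flow with OpenCV, subsample it, and draw the arrows with matplotlib's quiver; the frame file names and the step size are illustrative assumptions:

import cv2
import numpy as np
import matplotlib.pyplot as plt

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder consecutive frames
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

step = 16  # larger step means fewer, clearer arrows
y, x = np.mgrid[step // 2:prev.shape[0]:step, step // 2:prev.shape[1]:step]
u, v = flow[y, x, 0], flow[y, x, 1]
plt.imshow(prev, cmap="gray")
plt.quiver(x, y, u, v, color="r", angles="xy")  # arrows in image coordinates (y axis points down)
plt.show()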

E. How is the partial derivative with respect to time computed in optical flow?

This mainly concerns retimed footage. Say you have four frames a b c d and change the speed to 50%: the software has to decide what to do with the extra frames it now needs.
With frame sampling you simply get a a b b c c d d, displayed as-is; this renders fastest.
With frame blending you get a, (a+b)/2 (that is, the two frames blended at 50% opacity each), b, (b+c)/2, c, (c+d)/2; this affects render speed, but only slightly.
With optical flow, the algorithm computes the displacement of every pixel between frames and synthesizes a brand-new frame from those displacements to insert wherever a frame is missing. For continuously shot footage with little frame-to-frame change this works quite well: the retimed result shows no visible stutter. But where the picture changes drastically, or across a cut, the result is a mess. Enabling it also slows rendering somewhat.
So choose according to the situation; a small sketch of the frame-blending case follows below.
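As an illustration of the frame-blending case above, here is a minimal OpenCV Python sketch (file names are placeholders; a and b must be consecutive frames of equal size; the optical-flow case would instead warp pixels along the estimated motion, e.g. with cv2.remap, before blending):

import cv2

a = cv2.imread("frame_a.png")  # two consecutive frames (placeholder file names)
b = cv2.imread("frame_b.png")
# the "(a+b)/2" intermediate frame: each source frame at 50% opacity
blended = cv2.addWeighted(a, 0.5, b, 0.5, 0)
cv2.imwrite("frame_ab.png", blended)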

F. Can optical flow be written in C#?

Here is an optical-flow algorithm I implemented myself; it computes the translation between two adjacent images by pattern search and matching.

Pattern matching: choose either a square pattern or an X-shaped pattern; in the two images, take the differences of the pattern's pixel gray values and sum them, and the location with the smallest total difference is taken as the best match.

Multi-pattern matching: pick several positions in the image, search for the best-matching location at each, and finally average the match results over all the positions.

In tests it performs well on rough, irregularly textured surfaces such as grass, asphalt and carpet.

// optical flow using the Multi-Pattern-Match algorithm: use the patterns with large inner diff to do multi-pattern matching, then average the result
// already implemented: square-pattern, X-pattern
class COpticalFlow_MPM
{
public:
COpticalFlow_MPM(){}
virtual ~COpticalFlow_MPM(){}

static bool AddImplementation(COpticalFlow_MPM* imp)
{ if(m_impNum < c_maxImpNum){
m_impTbl[m_impNum++] = imp; return true;
} return false;
} static void SetImageDimesion(int width, int height, int lineBytes)
{ for(int i = 0; i < m_impNum; ++i){
m_impTbl[i]->m_width = width;
m_impTbl[i]->m_height = height;
m_impTbl[i]->m_lineBytes = lineBytes;
m_impTbl[i]->GenerateSearchTable();
m_impTbl[i]->GeneratePatternTable();
}
} // auto choose the pattern to do optical flow
static void AutoOpticalFlow(uint8_t* image1, uint8_t* image2)
{
m_impTbl[m_impCurr]->calcOpticalFlow(image1, image2);

// check if need switch pattern
static int s_goodCount = 0;
static int s_badCount = 0;
if(m_quality > 0){
s_goodCount++;
}else{
s_badCount++;
} if(s_goodCount + s_badCount > 30){ if(s_badCount * 2 > s_goodCount){
m_impCurr = m_impCurr < (m_impNum - 1) ? m_impCurr + 1 : 0;
}
s_goodCount = s_badCount = 0;
}
} // the result
static uint8_t m_quality; // 0 ~ 255, 0 means the optical flow is invalid.
static float m_offset_x; // unit is pixel
static float m_offset_y;

protected:
virtual const char* Name() = 0;
virtual void GeneratePatternTable() = 0;

// prepare the address offset tables, that can make the calculation simple and fast.
void GenerateSearchTable()
{ // generate the search offset from corresponding location to the max distance
int index = 0; int yNum, ay[2]; for (int dist = 1; dist <= c_searchD; ++dist){ for (int x = -dist; x <= dist; ++x){ // for each x, only have 1 or 2 dy choices.
ay[0] = dist - abs(x); if (ay[0] == 0){
yNum = 1;
} else{
yNum = 2;
ay[1] = -ay[0];
} for (int iy = 0; iy < yNum; ++iy){
m_searchOffsets[index++] = ay[iy] * m_lineBytes + x;
}
}
} // generate the watch points.
index = 0; int center = m_width * m_height / 2 + m_width / 2; for (int y = -c_watchN; y <= c_watchN; ++y){ for (int x = -c_watchN; x <= c_watchN; ++x){
m_watchPoints[index++] = center + y * c_watchG * m_lineBytes + x * c_watchG * m_width / m_height;
}
}
} void ResetResult()
{
m_quality = 0;
m_offset_x = 0;
m_offset_y = 0;
} void calcOpticalFlow(uint8_t* image1, uint8_t* image2)
{
ResetResult(); int betterStart; int matchedOffset; int x1, y1, x2, y2; int matchedCount = 0; int offset_x[c_watchS]; int offset_y[c_watchS]; for (int i = 0; i < c_watchS; ++i){ if (SearchMaxInnerDiff(image1, m_watchPoints[i], betterStart)){
int32_t minDiff = SearchBestMatch(image1 + betterStart, m_patternOffsets, c_patternS, image2, betterStart, matchedOffset); if (minDiff < c_patternS * c_rejectDiff){
x1 = betterStart % m_lineBytes; y1 = betterStart / m_lineBytes;
x2 = matchedOffset % m_lineBytes; y2 = matchedOffset / m_lineBytes;
m_offset_x += (x2 - x1);
m_offset_y += (y2 - y1);
offset_x[matchedCount] = (x2 - x1);
offset_y[matchedCount] = (y2 - y1);
matchedCount++;
}
}
} if (matchedCount >= 4){
m_offset_x /= matchedCount;
m_offset_y /= matchedCount; // calculate the variance, and use the variance to get the quality.
float varX = 0, varY = 0; for (int i = 0; i < matchedCount; ++i){
varX += (offset_x[i] - m_offset_x) * (offset_x[i] - m_offset_x);
varY += (offset_y[i] - m_offset_y) * (offset_y[i] - m_offset_y);
}
varX /= (matchedCount - 1);
varY /= (matchedCount - 1); float varMax = varX > varY ? varX : varY;
m_quality = (uint8_t)(varMax > 2 ? 0 : (2-varMax) * 255 / 2); if(m_quality == 0){
ResetResult();
}
}
} // get the pattern inner diff, the pattern is center of the area.
inline int32_t InnerDiff(const uint8_t* center, const int* patternPoints, const int patternSize)
{
int32_t sum = 0;
int32_t mean = 0; for (int i = 0; i < patternSize; ++i){
sum += center[patternPoints[i]];
}
mean = sum / patternSize;

int32_t sumDiff = 0; for (int i = 0; i < patternSize; ++i){
sumDiff += abs(center[patternPoints[i]] - mean);
} return sumDiff;
} // get the sum diff between two pattern, the pattern is the center of the area.
inline int32_t PatternDiff(const uint8_t* center1, const uint8_t* center2, const int* patternPoints, const int patternSize)
{
int32_t sumDiff = 0; for (int i = 0; i < patternSize; ++i){
sumDiff += abs(center1[patternPoints[i]] - center2[patternPoints[i]]);
} return sumDiff;
} // search the max inner diff location, image is the full image begining, the return value searchOffset is base on the image begining.
inline bool SearchMaxInnerDiff(const uint8_t* image, int searchStart, int& betterStart)
{ // if the inner diff is less than this number, cannot use this pattern to do search.
const int c_minInnerDiff = c_patternS * 4; const int c_acceptInnerDiff = c_patternS * 12; const uint8_t* searchCenter = image + searchStart;
int32_t currDiff = InnerDiff(searchCenter, m_patternOffsets, c_patternS);
int32_t maxDiff = currDiff;
betterStart = 0; for (int i = 0; i < c_searchS; ++i){
currDiff = InnerDiff(searchCenter + m_searchOffsets[i], m_patternOffsets, c_patternS); if (currDiff > maxDiff){
maxDiff = currDiff;
betterStart = m_searchOffsets[i];
} if (maxDiff > c_acceptInnerDiff){ break;
}
} if (maxDiff < c_minInnerDiff){ return false;
}

betterStart += searchStart; return true;
} // get the minnmum pattern diff with the 8 neighbors.
inline int32_t MinNeighborDiff(const uint8_t* pattern)
{ const int32_t threshDiff = c_patternS * c_acceptDiff; // eight neighbors of a pattern
const int neighborOffsets[8] = { -1, 1, -m_lineBytes, m_lineBytes, -m_lineBytes - 1, -m_lineBytes + 1, m_lineBytes - 1, m_lineBytes + 1 }; int minDiff = PatternDiff(pattern, pattern + neighborOffsets[0], m_patternOffsets, c_patternS); if (minDiff < threshDiff){ return minDiff;
} int diff; for (int i = 1; i < 8; ++i){
diff = PatternDiff(pattern, pattern + neighborOffsets[i], m_patternOffsets, c_patternS); if (diff < minDiff){
minDiff = diff; if (minDiff < threshDiff){ return minDiff;
}
}
} return minDiff;
} // search the pattern that have max min_diff with neighbors, image is the full image begining, the return value betterStart is base on the image begining.
inline bool SearchMaxNeighborDiff(const uint8_t* image, int searchStart, int& betterStart)
{ const uint8_t* searchCenter = image + searchStart;
int32_t currDiff = MinNeighborDiff(searchCenter);
int32_t maxDiff = currDiff;
betterStart = 0; for (int i = 0; i < c_searchS; ++i){
currDiff = MinNeighborDiff(searchCenter + m_searchOffsets[i]); if (currDiff > maxDiff){
maxDiff = currDiff;
betterStart = m_searchOffsets[i];
}
} if (maxDiff <= c_patternS * c_acceptDiff){ return false;
}

betterStart += searchStart; return true;
} // match the target pattern in the image, return the best match quality and matched offset; the pattern is the center, image is the full image begining.
inline int32_t SearchBestMatch(const uint8_t* target, const int* patternPoints, const int patternSize, const uint8_t* image, int searchStart, int& matchedOffset)
{ const int thinkMatchedDiff = patternSize * c_acceptDiff; const uint8_t* searchCenter = image + searchStart; const uint8_t* matched = searchCenter;
int32_t currDiff = PatternDiff(target, matched, patternPoints, patternSize);
int32_t minDiff = currDiff; for (int i = 0; i < c_searchS; ++i){
currDiff = PatternDiff(target, searchCenter + m_searchOffsets[i], patternPoints, patternSize); if (currDiff < minDiff){
minDiff = currDiff;
matched = searchCenter + m_searchOffsets[i];
} if (minDiff < thinkMatchedDiff){ break;
}
}

matchedOffset = matched - image; return minDiff;
}

int m_width, m_height, m_lineBytes;
static const int c_acceptDiff = 2; // if the average pixel error is less than this number, think already matched
static const int c_rejectDiff = 8; // if the average pixel error is larger than this number, think it's not matched

// all address offset to the pattern key location, the size is according to the square pattern.
static const int c_patternN = 3;
static const int c_patternS = (2 * c_patternN + 1) * (2 * c_patternN + 1);
int m_patternOffsets[c_patternS];

// the offsets to the image start for each seed point, the match is around these seed points.
static const int c_watchN = 2;
static const int c_watchS = (2 * c_watchN + 1) * (2 * c_watchN + 1);
static const int c_watchG = 30; // The gap of the watch grid in height direction
int m_watchPoints[c_watchS];

// the search offset to the search center, match the pattern from the corresponding location to the max distance. (not include distance 0.)
static const int c_searchD = 10; // search street-distance from the key location
static const int c_searchS = 2 * c_searchD * c_searchD + 2 * c_searchD;
int m_searchOffsets[c_searchS];

// The implements table that use various pattern
static int m_impCurr;
static int m_impNum;
static const int c_maxImpNum = 16;
static COpticalFlow_MPM* m_impTbl[c_maxImpNum];
};

// save the optical flow result
uint8_t COpticalFlow_MPM::m_quality; // 0 ~ 255, 0 means the optical flow is invalid.
float COpticalFlow_MPM::m_offset_x; // unit is pixel
float COpticalFlow_MPM::m_offset_y;
// the implements that use different pattern
int COpticalFlow_MPM::m_impCurr = 0;
int COpticalFlow_MPM::m_impNum = 0;
COpticalFlow_MPM* COpticalFlow_MPM::m_impTbl[COpticalFlow_MPM::c_maxImpNum];

// Multi-Pattern-Match-Square
class COpticalFlow_MPMS : public COpticalFlow_MPM
{
public:
COpticalFlow_MPMS(){}
virtual ~COpticalFlow_MPMS(){}
virtual const char* Name() { return "Square"; }

protected:
// prepare the address offset tables, that can make the calculation simple and fast.
virtual void GeneratePatternTable()
{ // generate the address offset of the match area to the center of the area.
int index = 0; for (int y = -c_patternN; y <= c_patternN; ++y){ for (int x = -c_patternN; x <= c_patternN; ++x){
m_patternOffsets[index++] = y * m_lineBytes + x;
}
}
}
};

// Multi-Pattern-Match-X
class COpticalFlow_MPMX : public COpticalFlow_MPM
{
public:
COpticalFlow_MPMX(){}
virtual ~COpticalFlow_MPMX(){}
virtual const char* Name() { return "X"; }

protected:
// prepare the address offset tables, that can make the calculation simple and fast.
virtual void GeneratePatternTable()
{ // generate the address offset of the match area to the center of the area.
int index = 0; int armLen = (c_patternS - 1) / 4; for (int y = -armLen; y <= armLen; ++y){ if(y == 0){
m_patternOffsets[index++] = 0;
}else{
m_patternOffsets[index++] = y * m_lineBytes - y;
m_patternOffsets[index++] = y * m_lineBytes + y;
}
}
}
};

static COpticalFlow_MPMS of_mpms;
static COpticalFlow_MPMX of_mpmx;

void OpticalFlow::init()
{ // set the optical flow implementation table
COpticalFlow_MPM::AddImplementation(&of_mpms);
COpticalFlow_MPM::AddImplementation(&of_mpmx);
COpticalFlow_MPM::SetImageDimesion(m_width, m_height, m_lineBytes);
}

uint32_t OpticalFlow::flow_image_in(const uint8_t *buf, int len, uint8_t *quality, int32_t *centi_pixel_x, int32_t *centi_pixel_y)
{
static uint8_t s_imageBuff1[m_pixelNum];
static uint8_t s_imageBuff2[m_pixelNum];
static uint8_t* s_imagePre = NULL;
static uint8_t* s_imageCurr = s_imageBuff1;
*quality = 0;
*centi_pixel_x = 0;
*centi_pixel_y = 0;

memcpy(s_imageCurr, buf, len); // first image
if(s_imagePre == NULL){
s_imagePre = s_imageCurr;
s_imageCurr = s_imageCurr == s_imageBuff1 ? s_imageBuff2 : s_imageBuff1; // switch image buffer
return 0;
}

COpticalFlow_MPM::AutoOpticalFlow(s_imagePre, s_imageCurr); if(COpticalFlow_MPM::m_quality > 0){ *quality = COpticalFlow_MPM::m_quality; *centi_pixel_x = (int32_t)(COpticalFlow_MPM::m_offset_x * 100); *centi_pixel_y = (int32_t)(COpticalFlow_MPM::m_offset_y * 100);
}

s_imagePre = s_imageCurr;
s_imageCurr = s_imageCurr == s_imageBuff1 ? s_imageBuff2 : s_imageBuff1; // switch image buffer
return 0;
}

G. A MATLAB program for motion detection based on optical flow, ideally with figures and using the classic HS (Horn-Schunck) method; many thanks

H. Please help with implementations of background subtraction, frame differencing, and optical flow; C++ or MATLAB is fine, and ready-made source code is fine too

I suggest using the OpenCV library.
Environment setup:
http://jingyan..com/album/2a138328497ce6074b134f64.html
Matrix operations:
http://blog.sina.com.cn/s/blog_afe2af380101bqhz.html

Code:
absdiff(frame, prveframe, differframe); // get the difference frame: differframe = frame - prveframe
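As a starting point, here is a minimal Python sketch of the frame-differencing plus running-average background update described in section B; the input file name, the threshold value 30 and the update rate 0.01 are illustrative assumptions:

import cv2
import numpy as np

cap = cv2.VideoCapture("input.avi")  # placeholder input video
ok, frame = cap.read()
background = frame.astype(np.float32)  # running-average background model
while ok:
    diff = cv2.absdiff(frame, cv2.convertScaleAbs(background))  # current frame vs. background
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, fgmask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)  # keep only large changes as foreground
    # update the background everywhere except where foreground was detected
    cv2.accumulateWeighted(frame, background, 0.01, mask=cv2.bitwise_not(fgmask))
    cv2.imshow("foreground", fgmask)
    if cv2.waitKey(10) & 0xFF == 27:  # Esc to quit
        break
    ok, frame = cap.read()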

I. Design of feature-optical-flow-based visual tracking of moving vehicles (non-professionals please refrain)

Visual tracking essentially means using image-processing techniques to pick out the moving object in an image sequence.
Features are essential: without features there is no basis for tracking. There are many possible features to choose from, and their principles differ and are unrelated to one another. The optical flow you mention exploits statistical properties over time; as for your so-called feature-based optical flow, that depends on what you are building and what your requirements are, and nobody else knows what you intend to do or which features you mean.
There are many detection methods for video image sequences, so that part cannot be answered as asked.
Question 4 is covered in most image-processing textbooks; I suggest you read them carefully.
Question 5 I cannot answer either, for the reasons given above.

Judging from your questions, I suspect this is a master's thesis (unless it is just ordinary background generation, which could be given to an undergraduate). First, you need to study the books properly: these questions should be learned on your own rather than asked, which is the most basic research skill. Second, your questions are far too broad and come across as unprofessional; even a professor would not know where to start answering them.
I suggest reading a digital image processing textbook; there are many, and Gonzalez's is probably the easiest to get into, since it is explained in an accessible way and will help you grasp the basics.
In addition, you should read the literature extensively on the specific methods (don't say you can't search for it); these will be a series of mathematical problems that nobody can solve for you, not even your advisor (unless you are an undergraduate, in which case the advisor basically hands you something already worked out), and you will have to describe them in detail in your thesis.
