<p>Based on a disparity matrix from a passive stereo camera system, I need to compute a v-disparity representation for obstacle detection with OpenCV.</p>
<p>A working implementation is not the problem. The problem is doing it fast...</p>
<p>(A v-disparity reference: Labayrade, R., Aubert, D. and Tarel, J.-P.,
"Real Time Obstacle Detection in Stereovision on Non Flat Road Geometry Through V-disparity Representation")</p>
<p>In short: to get the v-disparity (figure <a href="https://i.stack.imgur.com/fPvsg.png" rel="nofollow noreferrer">1</a>), the idea is to analyze the rows of the disparity matrix (figure <a href="https://i.stack.imgur.com/YXYvf.png" rel="nofollow noreferrer">2</a>) and represent the result as a histogram per row over the disparity values. The u-disparity (figure <a href="https://i.stack.imgur.com/R0qSt.png" rel="nofollow noreferrer">3</a>) is the same over the columns of the disparity matrix. (All figures are in false color.)</p>
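<p>In NumPy terms, the v-disparity is just a per-row histogram of the disparity map. A minimal sketch of that definition, using a hypothetical 2×3 disparity map and <code>np.bincount</code> in place of <code>cv2.calcHist</code>:</p>

```python
import numpy as np

# hypothetical tiny disparity map, values in [0, max_disp)
disp = np.array([[0, 1, 1],
                 [2, 2, 0]], dtype=np.uint8)
max_disp = 3

# v-disparity: one histogram per image row over the disparity values
v_disp = np.zeros((disp.shape[0], max_disp), dtype=np.int64)
for row in range(disp.shape[0]):
    v_disp[row] = np.bincount(disp[row], minlength=max_disp)

print(v_disp)  # [[1 2 0]
               #  [1 0 2]]
```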
<p>I implemented "the same" in Python and C++. The speed in Python is acceptable, but in C++ computing the u- and v-disparity takes about half a second (0.5 s).</p>
<p><em>(1st edit: with separate time measurements, it turns out that only the computation of the u-histogram takes that much time...)</em></p>
<p>This leads to the following questions:</p>
<ol>
<li><p>Is it possible to avoid the row-wise loop of the histogram calculation? Is there a "trick" to do it with one call to OpenCV's <code>calcHist</code> function, perhaps via the dimensions argument?</p></li>
<li><p>Or is it simply bad C++ coding, and the runtime problem is unrelated to the calculation loop?</p></li>
</ol>
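<p>Regarding question 1: I am not aware of a <code>calcHist</code> argument that computes all row histograms at once, but in NumPy the per-row loop can be collapsed into a single histogram call by shifting each row's values into its own range of bins. A sketch of that trick (plain NumPy, not OpenCV; the array names and sizes here are made up for illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, max_disp = 4, 5, 8
disp = rng.integers(0, max_disp, size=(h, w)).astype(np.uint8)

# reference: explicit per-row histogram loop
loop = np.stack([np.bincount(disp[r], minlength=max_disp) for r in range(h)])

# single-pass version: offset each row's values into a private bin range,
# then one global bincount covers all rows at once
idx = disp.astype(np.int64) + np.arange(h)[:, None] * max_disp
vec = np.bincount(idx.ravel(), minlength=h * max_disp).reshape(h, max_disp)

assert np.array_equal(loop, vec)
```

The same idea should carry over to C++ by accumulating into a pre-allocated histogram matrix in one pass over the disparity image, instead of calling a histogram routine per row.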
<p>Thanks, everyone.</p>
<hr/>
<p>Working implementation in Python:</p>
<pre><code>#!/usr/bin/env python2
#-*- coding: utf-8 -*-
#
# THIS SOURCE-CODE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED. IN NO EVENT WILL THE AUTHOR BE HELD LIABLE FOR ANY DAMAGES ARISING FROM
# THE USE OF THIS SOURCE-CODE. USE AT YOUR OWN RISK.
import cv2
import numpy as np
import time
def draw_object(image, x, y, width=50, height=100):
color = image[y, x]
image[y-height:y, x-width//2:x+width//2] = color
IMAGE_HEIGHT = 600
IMAGE_WIDTH = 800
while True:
max_disp = 200
# create fake disparity
image = np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH), np.uint8)
for c in range(IMAGE_HEIGHT)[::-1]:
image[c, ...] = int(float(c) / IMAGE_HEIGHT * max_disp)
draw_object(image, 275, 175)
draw_object(image, 300, 200)
draw_object(image, 100, 350)
# calculate v-disparity
vhist_vis = np.zeros((IMAGE_HEIGHT, max_disp), np.float)
for i in range(IMAGE_HEIGHT):
vhist_vis[i, ...] = cv2.calcHist(images=[image[i, ...]], channels=[0], mask=None, histSize=[max_disp],
ranges=[0, max_disp]).flatten() / float(IMAGE_HEIGHT)
vhist_vis = np.array(vhist_vis * 255, np.uint8)
vblack_mask = vhist_vis < 5
vhist_vis = cv2.applyColorMap(vhist_vis, cv2.COLORMAP_JET)
vhist_vis[vblack_mask] = 0
# calculate u-disparity
uhist_vis = np.zeros((max_disp, IMAGE_WIDTH), np.float)
for i in range(IMAGE_WIDTH):
uhist_vis[..., i] = cv2.calcHist(images=[image[..., i]], channels=[0], mask=None, histSize=[max_disp],
ranges=[0, max_disp]).flatten() / float(IMAGE_WIDTH)
uhist_vis = np.array(uhist_vis * 255, np.uint8)
ublack_mask = uhist_vis < 5
uhist_vis = cv2.applyColorMap(uhist_vis, cv2.COLORMAP_JET)
uhist_vis[ublack_mask] = 0
image = cv2.applyColorMap(image, cv2.COLORMAP_JET)
cv2.imshow('image', image)
cv2.imshow('vhist_vis', vhist_vis)
cv2.imshow('uhist_vis', uhist_vis)
cv2.imwrite('disparity_image.png', image)
cv2.imwrite('v-disparity.png', vhist_vis)
cv2.imwrite('u-disparity.png', uhist_vis)
if chr(cv2.waitKey(0)&255) == 'q':
break
</code></pre>
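<p>As a side note, the two <code>calcHist</code> loops above could be replaced by a vectorized version. Here is a hedged sketch for the u-disparity (the slow part per the edit above): transposing reduces the per-column histograms to the per-row case, which a single <code>np.bincount</code> with per-column bin offsets then handles in one pass. The tiny 3×4 map is made up for illustration:</p>

```python
import numpy as np

h, w, max_disp = 3, 4, 5
disp = np.array([[0, 1, 2, 3],
                 [4, 4, 1, 0],
                 [2, 2, 2, 2]], dtype=np.uint8)

# u-disparity: one histogram per column; transpose turns columns into rows,
# then shift each (former) column into its own bin range and count once
idx = disp.T.astype(np.int64) + np.arange(w)[:, None] * max_disp
u_disp = np.bincount(idx.ravel(), minlength=w * max_disp).reshape(w, max_disp).T

# u_disp[d, c] counts the pixels in column c with disparity d,
# so u_disp has shape (max_disp, w), as in the script above
print(u_disp.shape)  # (5, 4)
```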
<hr/>
<p>Working implementation in C++:</p>
^{pr2}$
<hr/>
<p><a href="https://i.stack.imgur.com/fPvsg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fPvsg.png" alt="enter image description here"/></a>
Figure 1: v-disparity</p>
<p><a href="https://i.stack.imgur.com/YXYvf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YXYvf.png" alt="fake disparity matrix"/></a>
Figure 2: fake disparity matrix</p>
<p><a href="https://i.stack.imgur.com/R0qSt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R0qSt.png" alt="enter image description here"/></a>
Figure 3: u-disparity</p>
<hr/>
<ol>
<li>Edit:
<ul>
<li>correct names for u- and v-disparity in the C++ example, and separate time measurements</li>
<li>minor typos</li>
</ul></li>
</ol>