<p>Regarding the explanation of the error: it comes from <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/imgproc/src/hough.cpp#L1659" rel="nofollow noreferrer">hough.cpp#L1659</a>:</p>
<pre><code>CV_Assert(!_image.empty() && _image.type() == CV_8UC1 && (_image.isMat() || _image.isUMat()));
</code></pre>
<p>Breaking it down, all of the following conditions must hold:</p>
<ul>
<li><code>!_image.empty()</code>: the input image must not be empty</li>
<li><code>_image.type() == CV_8UC1</code>: the input image must be <code>8U</code> (8-bit unsigned, <code>np.uint8</code>) and <code>C1</code> (single channel)</li>
<li><code>_image.isMat() || _image.isUMat()</code>: checks that the input is a <code>Mat</code> or a <code>UMat</code> (in Python, it must be a numpy array)</li>
</ul>
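<p>In Python terms, the three conditions can be pre-checked before calling <code>cv2.HoughCircles</code>. Here is a minimal sketch; the helper name is mine, not part of OpenCV:</p>

```python
import numpy as np

def check_hough_input(image):
    """Pre-check the CV_Assert conditions from hough.cpp in Python.

    A hypothetical helper (not part of OpenCV) that raises a readable
    error instead of the terse (-215) assertion message.
    """
    # _image.isMat() || _image.isUMat(): in Python, the input must be a numpy array
    if not isinstance(image, np.ndarray):
        raise ValueError("input must be a numpy array")
    # !_image.empty()
    if image.size == 0:
        raise ValueError("input image is empty")
    # _image.type() == CV_8UC1: 8-bit unsigned, single channel
    if image.dtype != np.uint8 or image.ndim != 2:
        raise ValueError("input must be uint8 and single-channel (2-D)")

check_hough_input(np.zeros((10, 10), dtype=np.uint8))  # passes silently
```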
<p>Regarding your specific error message (<code>error: (-215) !_image.empty() && _image.type() == (((0) & ((1 << 3) - 1)) + (((1)-1) << 3)) && (_image.isMat() || _image.isUMat())</code>):</p>
<ul>
<li>The error code (-215) comes from <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/core/include/opencv2/core/base.hpp#L115" rel="nofollow noreferrer">here</a>: it is the generic <code>CV_StsAssert</code></li>
<li>Then those numbers: they stand for <code>CV_8UC1</code>. Want to know why? You should :) Here we go:
<ol>
<li><code>CV_8UC1</code></li>
<li><code>CV_MAKETYPE(CV_8U,1)</code>: from <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/core/include/opencv2/core/hal/interface.h#L86" rel="nofollow noreferrer"><code>CV_8UC1</code></a></li>
<li><code>CV_MAKETYPE(0,1)</code>: from <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/core/include/opencv2/core/hal/interface.h#L71" rel="nofollow noreferrer"><code>CV_8U</code></a></li>
<li><code>(CV_MAT_DEPTH(0) + (((1)-1) << CV_CN_SHIFT))</code>: from <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/core/include/opencv2/core/hal/interface.h#L83" rel="nofollow noreferrer"><code>CV_MAKETYPE</code></a></li>
<li><code>(((0) & CV_MAT_DEPTH_MASK) + (((1)-1) << CV_CN_SHIFT))</code>: from <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/core/include/opencv2/core/hal/interface.h#L81" rel="nofollow noreferrer"><code>CV_MAT_DEPTH</code></a></li>
<li><code>(((0) & (CV_DEPTH_MAX - 1)) + (((1)-1) << CV_CN_SHIFT))</code>: from <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/core/include/opencv2/core/hal/interface.h#L80" rel="nofollow noreferrer"><code>CV_MAT_DEPTH_MASK</code></a></li>
<li><code>(((0) & ((1 << CV_CN_SHIFT) - 1)) + (((1)-1) << CV_CN_SHIFT))</code>: from <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/core/include/opencv2/core/hal/interface.h#L69" rel="nofollow noreferrer"><code>CV_DEPTH_MAX</code></a></li>
<li><code>(((0) & ((1 << 3) - 1)) + (((1)-1) << 3))</code>: from <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/core/include/opencv2/core/hal/interface.h#L68" rel="nofollow noreferrer"><code>CV_CN_SHIFT</code></a></li>
</ol></li>
</ul>
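<p>The whole expansion can be replayed in a few lines of Python (the constants mirror the <code>interface.h</code> macros quoted above). Note that <code>CV_8UC1</code> evaluates to <code>0</code>, which is why the error message looks so cryptic:</p>

```python
# Python replica of the interface.h macros involved in the expansion
CV_CN_SHIFT = 3
CV_DEPTH_MAX = 1 << CV_CN_SHIFT
CV_MAT_DEPTH_MASK = CV_DEPTH_MAX - 1
CV_8U = 0

def CV_MAT_DEPTH(flags):
    return flags & CV_MAT_DEPTH_MASK

def CV_MAKETYPE(depth, cn):
    return CV_MAT_DEPTH(depth) + ((cn - 1) << CV_CN_SHIFT)

CV_8UC1 = CV_MAKETYPE(CV_8U, 1)
print(CV_8UC1)                                     # 0
# The literal from the error message evaluates to the same value:
print(((0) & ((1 << 3) - 1)) + (((1) - 1) << 3))   # 0
```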
<hr/>
<p>I will try to complement <a href="https://stackoverflow.com/users/2836621">@Mark Setchell</a>'s answer, just because I was curious and wanted to share :)</p>
<p>If you look at the documentation, <a href="https://docs.opencv.org/3.4.1/dd/d1a/group__imgproc__feature.html#ga47849c3be0d0406ad3ca45db65a25d2d" rel="nofollow noreferrer"><code>HoughCircles</code></a> is part of the <a href="https://docs.opencv.org/3.4.1/d7/dbd/group__imgproc.html" rel="nofollow noreferrer">imgproc</a> module (under the <a href="https://docs.opencv.org/3.4.1/dd/d1a/group__imgproc__feature.html" rel="nofollow noreferrer">Feature Detection</a> "<em>submodule</em>"). The docs say the only implemented method is <a href="https://docs.opencv.org/3.4.1/d7/dbd/group__imgproc.html#gga073687a5b96ac7a3ab5802eb5510fe65ab1bf00a90864db34b2f72fa76389931d" rel="nofollow noreferrer">HOUGH_GRADIENT</a> (a.k.a. <a href="https://docs.opencv.org/3.4.1/d0/de3/citelist.html#CITEREF_Yuen90" rel="nofollow noreferrer">21HT</a>, i.e. the two-stage Hough Transform), and they point to the reference "<a href="https://www.sciencedirect.com/science/article/pii/026288569090059E" rel="nofollow noreferrer">Comparative study of Hough Transform methods for circle finding</a>" (1990) :). (If you cannot access it due to the paywall, you can get the <a href="http://www.bmva.org/bmvc/1989/avc-89-029.pdf" rel="nofollow noreferrer">1989's version for free</a>.) The authors of that paper comment:</p>
<blockquote>
<p>The HT method of shape analysis uses a constraint equation relating points in a feature space to possible parameter values of the searched for shape. For each
feature point, <strong>invariably edge points</strong>, votes are accumulated for all parameter combinations which satisfy the constraint. [...]</p>
</blockquote>
<p>Later on, they write:</p>
<blockquote>
<p><strong>If edge direction information is available</strong>, then one way to reduce the storage and computational demands of circle finding is to decompose the problem into two
stages [...]</p>
</blockquote>
<p>So, if you want to stick with 21HT, you basically need edges plus edge-direction information. You can obtain the edge-direction information with <code>Sobel</code> (the <code>dx</code> and <code>dy</code> derivatives, for example), and then feed those already-computed <code>dx</code> and <code>dy</code> to <code>Canny</code> to get the edges. In fact, that is exactly what the OpenCV implementation does: if you navigate to <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/imgproc/src/hough.cpp" rel="nofollow noreferrer">hough.cpp</a>, you can see the Sobel + Sobel + Canny operations <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/imgproc/src/hough.cpp#L1569-L1571" rel="nofollow noreferrer">here</a>.</p>
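<p>The "edges plus edge-direction" idea can be sketched with plain NumPy. This is a conceptual illustration only: <code>np.gradient</code> stands in for the Sobel <code>dx</code>/<code>dy</code> pair, and the crude threshold stands in for Canny's hysteresis, so it is not OpenCV's exact kernels:</p>

```python
import numpy as np

# Toy image with one bright square, so the edges are its border.
img = np.zeros((32, 32), dtype=np.float64)
img[8:24, 8:24] = 255.0

dy, dx = np.gradient(img)             # per-pixel derivatives (Sobel stand-in)
magnitude = np.hypot(dx, dy)          # edge strength (what Canny thresholds)
direction = np.arctan2(dy, dx)        # edge direction (what 21HT votes along)

# Crude stand-in for Canny's hysteresis thresholding:
edges = magnitude > magnitude.max() * 0.5
```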
<p>So, what's the point? Well, it means that if you have another method (or you want to propose a new one, why not?) that returns edges and edge-direction information better suited to your case (maybe <em>color</em> has a different meaning in your scenario), you can replace those 3 lines (Sobel + Sobel + Canny) with your method and reuse the rest of the implementation (cool, huh?). If you need some inspiration :), you can take a look at "<a href="http://ai.stanford.edu/~ruzon/compass/color.html" rel="nofollow noreferrer">A Short History of Color Edge Detection</a>" and go from there.</p>
<p>So, why do we need single-channel input? Well, basically because we need edges, and they are usually represented as single-channel images. Also, the current <a href="https://github.com/opencv/opencv/blob/3.4.1/modules/imgproc/src/hough.cpp#L1080-L1193" rel="nofollow noreferrer">implementation</a> only supports single-channel edge and edge-direction information. Most of these concepts could, however, be extended to multi-channel input. I guess that, since there is no general solution (the concepts would probably change case by case) and few people would benefit from it, nobody has bothered to provide an implementation so far.</p>
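<p>Practically, this means collapsing a color image to one channel before calling <code>HoughCircles</code>. A NumPy-only sketch using the common BGR-to-luma weights (the same weighting used by <code>cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)</code>):</p>

```python
import numpy as np

# A random 3-channel "BGR" image (stand-in for cv2.imread output).
bgr = np.random.randint(0, 256, size=(48, 48, 3), dtype=np.uint8)

weights = np.array([0.114, 0.587, 0.299])          # B, G, R luma weights
gray = (bgr @ weights).round().astype(np.uint8)    # now CV_8UC1-compatible

print(gray.shape, gray.dtype)  # (48, 48) uint8
```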
<p>Sorry for the long answer. I know the TL;DR "the method needs single-channel input" would have been enough, but I was curious and wanted to share :)</p>