<p>This solution is a Python implementation of <a href="https://stackoverflow.com/questions/26586123/filling-gaps-in-shape-edges/">this approach</a>. The idea is to convolve the image with a special kernel that identifies the starting/ending points of a line. These are the steps:</p>
<ol>
<li><strong>Resize</strong> the image a little, since it is too big</li>
<li>Convert the image to grayscale</li>
<li>Get the skeleton</li>
<li><strong>Convolve</strong> the skeleton with the end-points kernel</li>
<li>Get the <strong>coordinates</strong> of the end-points</li>
</ol>
<p>Now, this would be the first iteration of the proposed algorithm. However, depending on the input image, there could be duplicated end-points: individual points that are too close to each other and could be joined. So, let's incorporate some extra processing to get rid of these duplicated points:</p>
<ol start="6">
<li><strong>Identify</strong> possible duplicated points</li>
<li><strong>Join</strong> the duplicated points</li>
<li><strong>Compute</strong> the final end-points</li>
</ol>
<p>These last steps are too general; let me elaborate further on the idea behind the duplicate elimination once we reach that stage. Let's see the code for the first part:</p>
<pre><code># imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "hJVBX.jpg"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Resize image:
scalePercent = 50 # percent of original size
width = int(inputImage.shape[1] * scalePercent / 100)
height = int(inputImage.shape[0] * scalePercent / 100)
# New dimensions:
dim = (width, height)
# resize image
resizedImage = cv2.resize(inputImage, dim, interpolation=cv2.INTER_AREA)
# Color conversion
grayscaleImage = cv2.cvtColor(resizedImage, cv2.COLOR_BGR2GRAY)
grayscaleImage = 255 - grayscaleImage
</code></pre>
<p>So far, I have resized the image (to <code>0.5</code> of the original scale) and converted it to grayscale (in fact, an inverted binary image). Now, the first step for detecting end-points is to normalize the line <code>width</code> to <code>1 pixel</code>. This is achieved by computing the <code>skeleton</code>, which can be implemented with OpenCV's <em>extended image processing module</em>:</p>
<pre><code># Compute the skeleton:
skeleton = cv2.ximgproc.thinning(grayscaleImage, None, 1)
</code></pre>
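<p>Note that <code>cv2.ximgproc.thinning</code> ships with the <code>opencv-contrib-python</code> package. If the contrib module is not available, a coarser morphological skeleton can serve as a stand-in. This is just a sketch of that fallback; it does not guarantee strictly one-pixel-wide, fully connected strokes, so <code>thinning</code> is preferred when available:</p>
<pre><code>import cv2
import numpy as np

def morphSkeleton(binaryImage):
    # Fallback skeleton: accumulate the residue of successive openings.
    # Assumes a white-on-black binary image:
    skeleton = np.zeros_like(binaryImage)
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    while cv2.countNonZero(binaryImage) > 0:
        eroded = cv2.erode(binaryImage, kernel)
        opened = cv2.dilate(eroded, kernel)
        # Pixels removed by the opening belong to the skeleton:
        skeleton = cv2.bitwise_or(skeleton, cv2.subtract(binaryImage, opened))
        binaryImage = eroded
    return skeleton
</code></pre>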
<p>This is the skeleton:</p>
<img src="https://i.imgur.com/UZ4OpDK.png" width="400"/>
<p>Now, let's run the end-point detection part:</p>
<pre><code># Threshold the image so that white pixels get a value of 10 and
# black pixels a value of 0:
_, binaryImage = cv2.threshold(skeleton, 128, 10, cv2.THRESH_BINARY)
# Set the end-points kernel:
h = np.array([[1, 1, 1],
[1, 10, 1],
[1, 1, 1]])
# Convolve the image with the kernel:
imgFiltered = cv2.filter2D(binaryImage, -1, h)
# Extract only the end-points pixels, those with
# an intensity value of 110:
endPointsMask = np.where(imgFiltered == 110, 255, 0)
# The above operation produced an array of a wider type than
# 8-bit, convert back to 8-bit uint:
endPointsMask = endPointsMask.astype(np.uint8)
</code></pre>
<p>Check out the original link for information on this method, but the general gist is that the kernel is designed so that convolving it with an end-point in a line will produce a value of <code>110</code> as the result of the neighborhood summation. There are data-type subtleties involved, so be careful with types and conversions. The result of the procedure can be observed here:</p>
<img src="https://i.imgur.com/9wnZDmr.png" width="400"/>
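<p>To see why an end-point yields exactly <code>110</code>: a line pixel has value <code>10</code>, the kernel center weighs it by <code>10</code>, and an end-point has exactly one line neighbor, so the neighborhood sum is <code>10 * 10 + 10 = 110</code>, while an interior pixel has two neighbors and yields <code>120</code>. A minimal sketch on a synthetic, already-thresholded line:</p>
<pre><code>import cv2
import numpy as np

# Synthetic thresholded "skeleton": line pixels = 10, background = 0:
testLine = np.zeros((5, 9), np.uint8)
testLine[2, 2:7] = 10
# Same end-points kernel as above:
h = np.array([[1, 1, 1],
              [1, 10, 1],
              [1, 1, 1]])
filtered = cv2.filter2D(testLine, -1, h)
# End-points (2, 2) and (2, 6) yield 110; interior line pixels yield 120.
</code></pre>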
<p>However, note that among these end-points some could be too close to each other and should be joined. Now comes the duplicate elimination step. Let's first define the criterion for flagging a point as a duplicate: points that are too close will be joined. Let's propose a morphology-based notion of point closeness: I'll <strong>dilate</strong> the end-points mask iteratively with a <code>rectangular kernel</code> of size <code>3 x 3</code>. If two or more points are too close, their dilation will produce one big, unique blob:</p>
<pre><code># RGB copy of this:
rgbMask = endPointsMask.copy()
rgbMask = cv2.cvtColor(rgbMask, cv2.COLOR_GRAY2BGR)
# Create a copy of the mask for points processing:
groupsMask = endPointsMask.copy()
# Set kernel (structuring element) size:
kernelSize = 3
# Set operation iterations:
opIterations = 3
# Get the structuring element:
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform dilate:
groupsMask = cv2.morphologyEx(groupsMask, cv2.MORPH_DILATE, maxKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
</code></pre>
<p>This is the result of the dilation. I refer to this image as the <code>groupsMask</code>:</p>
<img src="https://i.imgur.com/SYnxJBK.png" width="400"/>
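<p>The merging behavior can be verified on a toy mask: two isolated pixels a few pixels apart become a single blob after three dilation passes with the same kernel and iteration settings as above. A small sketch:</p>
<pre><code>import cv2
import numpy as np

# Two end-points 4 pixels apart:
testMask = np.zeros((15, 15), np.uint8)
testMask[7, 5] = 255
testMask[7, 9] = 255
# Same kernel and iterations as in the main code:
rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
merged = cv2.morphologyEx(testMask, cv2.MORPH_DILATE, rectKernel, None, None, 3,
                          cv2.BORDER_REFLECT101)
# Blob counts (connectedComponents includes the background label, hence -1):
blobsBefore = cv2.connectedComponents(testMask)[0] - 1  # two separate points
blobsAfter = cv2.connectedComponents(merged)[0] - 1     # one merged blob
</code></pre>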
<p>Note how some points now share adjacency. I'll use this mask as a guide to produce the final centroids. The algorithm goes like this: loop through the <code>endPointsMask</code> and generate a label for every point. Use a <code>dictionary</code> to store each label and all the centroids that share it; the labels are propagated between different points via <code>flood-filling</code> on the <code>groupsMask</code>. In the <code>dictionary</code> we store the centroid-cluster label, the accumulated sum of centroids, and a count of how many centroids were accumulated, so we can produce a final average. Like this:</p>
<pre><code># Set the centroids Dictionary:
centroidsDictionary = {}
# Get centroids on the end points mask:
totalComponents, output, stats, centroids = cv2.connectedComponentsWithStats(endPointsMask, connectivity=8)
# Count the blob labels with this:
labelCounter = 1
# Loop through the centroids, skipping the background (0):
for c in range(1, len(centroids), 1):
    # Get the current centroids:
    cx = int(centroids[c][0])
    cy = int(centroids[c][1])
    # Get the pixel value on the groups mask:
    pixelValue = groupsMask[cy, cx]
    # If new value (255) there's no entry in the dictionary,
    # process a new key and value:
    if pixelValue == 255:
        # New key and values -> Centroid and Point Count:
        centroidsDictionary[labelCounter] = (cx, cy, 1)
        # Flood fill at centroid:
        cv2.floodFill(groupsMask, mask=None, seedPoint=(cx, cy), newVal=labelCounter)
        labelCounter += 1
    # Else, the label already exists and we must accumulate the
    # centroid and its count:
    else:
        # Get Value:
        (accumCx, accumCy, blobCount) = centroidsDictionary[pixelValue]
        # Accumulate value:
        accumCx = accumCx + cx
        accumCy = accumCy + cy
        blobCount += 1
        # Update dictionary entry:
        centroidsDictionary[pixelValue] = (accumCx, accumCy, blobCount)
</code></pre>
<p>Here are some animations of the procedure. First, the centroids being processed one by one. We are trying to join those points that appear too close to each other:</p>
<img src="https://i.imgur.com/rRNOrp4.gif" width="400"/>
<p>And here's the groups mask being flood-filled with new labels. The points that share a label are added together to produce a final average point. It is a little hard to see, because my labels start at <code>1</code>, so you can barely notice the labels being filled in:</p>
<img src="https://i.imgur.com/FIocPRX.gif" width="400"/>
<p>Now, all that's left is to produce the final points. Loop through the dictionary and check each centroid and its count. If the count is greater than <code>1</code>, the centroid represents an accumulation and must be divided by its count to yield the final point:</p>
<pre><code># Loop through the dictionary and get the final centroid values:
for k in centroidsDictionary:
    # Get the value of the current key:
    (cx, cy, count) = centroidsDictionary[k]
    # Process combined points:
    if count != 1:
        cx = int(cx / count)
        cy = int(cy / count)
    # Draw circle at the centroid:
    cv2.circle(resizedImage, (cx, cy), 5, (0, 0, 255), -1)

cv2.imshow("Final Centroids", resizedImage)
cv2.waitKey(0)
</code></pre>
<p>This is the final image, showing the ending/starting points of the lines:</p>
<img src="https://i.imgur.com/aXdmuOS.png" width="400"/>
<p>Now, the end-point detection method, or rather, the convolution step, produced an apparent extra point on a curve, probably because a segment of the stroke is too separated from its neighborhood, splitting the curve in two. Perhaps applying a little morphology before the convolution could fix this.</p>
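<p>For instance, a <code>closing</code> (dilation followed by erosion) applied to the skeleton before the threshold/convolution could bridge small one-pixel breaks that split a stroke in two and spawn spurious end-points. A hedged sketch of that idea on a synthetic broken line; the helper simply repeats the threshold/convolution steps from above:</p>
<pre><code>import cv2
import numpy as np

def countEndPoints(binaryImage):
    # Same threshold + end-points convolution as before:
    _, b = cv2.threshold(binaryImage, 128, 10, cv2.THRESH_BINARY)
    h = np.array([[1, 1, 1],
                  [1, 10, 1],
                  [1, 1, 1]])
    return int(np.count_nonzero(cv2.filter2D(b, -1, h) == 110))

# A horizontal line with a one-pixel break -> two segments, four end-points:
brokenLine = np.zeros((7, 13), np.uint8)
brokenLine[3, 2:11] = 255
brokenLine[3, 6] = 0
# Closing bridges the gap before the end-points convolution:
rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
closedLine = cv2.morphologyEx(brokenLine, cv2.MORPH_CLOSE, rectKernel)
# countEndPoints(brokenLine) -> 4, countEndPoints(closedLine) -> 2
</code></pre>
<p>Keep in mind that an aggressive closing can also weld strokes that should stay separate, so the kernel size would need tuning against the real input.</p>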