How to average all coordinates within a given distance in a vectorized way

Posted 2024-09-26 18:00:59


I did find a way to compute the centre coordinates of a group of points. However, my method becomes quite slow as the number of initial coordinates grows (I have roughly 100,000 coordinates).

The bottleneck is the for loop in the code. I tried np.apply_along_axis, but found that it is nothing more than a hidden Python loop.
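For what it's worth, that observation about `np.apply_along_axis` can be checked directly: the function you pass is invoked once per 1-D slice, so it is a Python-level loop in disguise. A minimal illustration (not the original code):

```python
import numpy as np

a = np.arange(6).reshape(3, 2)

calls = []
def row_mean(row):
    calls.append(row.copy())  # record every invocation
    return row.mean()

out = np.apply_along_axis(row_mean, 1, a)
# row_mean was called once per row, i.e. len(calls) == 3,
# exactly as an explicit Python loop would do.
```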

Is it possible to detect and average out clusters of nearby points of varying sizes in a vectorized way?

import numpy as np
from scipy.spatial import cKDTree
np.random.seed(7)
max_distance=1

#Create example points
points = np.array([[1,1],[1,2],[2,1],[3,3],[3,4],[5,5],[8,8],[10,10],[8,6],[6,5]])

#Create trees and detect the points and neighbours which needs to be fused
tree = cKDTree(points)
rows_to_fuse = np.array(list(tree.query_pairs(r=max_distance))).astype('uint64')

#Split the points and neighbours into two groups
points_to_fuse = points[rows_to_fuse[:,0], :2]
neighbours = points[rows_to_fuse[:,1], :2]

#get unique points_to_fuse
nonduplicate_points = np.ascontiguousarray(points_to_fuse)
unique_points = np.unique(nonduplicate_points.view([('', nonduplicate_points.dtype)]\
                                                 *nonduplicate_points.shape[1]))
unique_points = unique_points.view(nonduplicate_points.dtype).reshape(\
                                          (unique_points.shape[0],\
                                           nonduplicate_points.shape[1]))
#Empty array to store fused points
fused_points = np.empty((len(unique_points), 2))

####BOTTLENECK LOOP####
for i, point in enumerate(unique_points):
    #Detect all rows where this unique point occurs
    locs = np.where((points_to_fuse[:, 0] == point[0])
                    & (points_to_fuse[:, 1] == point[1]))[0]
    #Average the point together with its neighbours at those rows
    fused_points[i, :] = np.vstack((point, neighbours[locs])).mean(axis=0)

#Get original points that didn't need to be fused
points_without_fuse = np.delete(points, np.unique(rows_to_fuse.reshape((1, -1))), axis=0)

#Stack result
points = np.vstack((points_without_fuse, fused_points))

Expected output

>>> print(points)
[[ 8.          8.        ]
 [10.         10.        ]
 [ 8.          6.        ]
 [ 1.33333333  1.33333333]
 [ 3.          3.5       ]
 [ 5.5         5.        ]]
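An aside (not in the original post): on NumPy 1.13 and later, the structured-array view trick used above to deduplicate rows can be replaced with `np.unique(..., axis=0)`, which also returns the rows in lexicographically sorted order:

```python
import numpy as np

points_to_fuse = np.array([[1, 1], [1, 1], [3, 3], [5, 5]])  # example with a duplicate row
unique_points = np.unique(points_to_fuse, axis=0)  # unique rows, sorted
# unique_points -> [[1, 1], [3, 3], [5, 5]]
```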

Edit 1: example of a single loop iteration and the desired result

Step 1: create the variables for the loop

#outside loop
points_to_fuse = np.array([[100,100],[101,101],[100,100]])
neighbours = np.array([[103,105],[109,701],[99,100]])
unique_points = np.array([[100,100],[101,101]])

#inside loop
point = np.array([100,100])
i = 0

Step 2: detect all locations where the unique point occurs in the points_to_fuse array

locs = np.where((points_to_fuse[:, 0] == point[0]) & (points_to_fuse[:, 1] == point[1]))[0]
>>> array([0, 2], dtype=int64)

Step 3: build an array of the point and its neighbours at these locations, and take the average

array_of_points = np.vstack((point, neighbours[locs]))
>>> array([[100, 100],
           [103, 105],
           [ 99, 100]])
fused_points[i, :] = np.average(array_of_points, 0)
>>> array([ 100.66666667,  101.66666667])

Loop output after a full run:

>>> print(fused_points)
>>> array([[ 100.66666667,  101.66666667],
           [ 105.        ,  401.        ]])
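The three steps above can be put together as a small runnable function on the toy data (a sketch; `fuse` is a hypothetical helper name, not from the original code):

```python
import numpy as np

def fuse(points_to_fuse, neighbours, unique_points):
    fused = np.empty((len(unique_points), 2))
    for i, point in enumerate(unique_points):
        # rows where this unique point occurs in points_to_fuse
        locs = np.where((points_to_fuse[:, 0] == point[0])
                        & (points_to_fuse[:, 1] == point[1]))[0]
        # average the point together with its neighbours at those rows
        fused[i] = np.vstack((point, neighbours[locs])).mean(axis=0)
    return fused

points_to_fuse = np.array([[100, 100], [101, 101], [100, 100]])
neighbours = np.array([[103, 105], [109, 701], [99, 100]])
unique_points = np.array([[100, 100], [101, 101]])
print(fuse(points_to_fuse, neighbours, unique_points))
# [[100.66666667 101.66666667]
#  [105.         401.        ]]
```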

1 answer

#1

The bottleneck is not the loop itself, which is necessary because the neighbourhoods do not all have the same size.

The trap is the points_to_fuse[:,0] == point[0] inside the loop, which triggers quadratic complexity. You can avoid it by sorting the points by index.

An example of how to do this, even if it doesn't solve the whole problem (after generating rows_to_fuse):

sorter = np.lexsort(rows_to_fuse.T)
sorted_points = rows_to_fuse[sorter]
uniques, counts = np.unique(sorted_points[:, 1], return_counts=True)
indices = counts.cumsum()
neighbourhood = np.split(sorted_points, indices)[:-1]
means = [(points[ne[:, 0]].sum(axis=0) + points[ne[0, 1]]) / (len(ne) + 1)
         for ne in neighbourhood]  # a simple Python loop
# + manage unfused points.
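If the remaining list comprehension ever becomes the hot spot, the per-neighbourhood sums can themselves be vectorized with `np.add.reduceat` over the sorted pairs. A sketch under the same grouping by second index as above, using hard-coded example pairs (not from the original post):

```python
import numpy as np

points = np.array([[1., 1], [1, 2], [2, 1], [3, 3], [3, 4],
                   [5, 5], [8, 8], [10, 10], [8, 6], [6, 5]])
rows_to_fuse = np.array([[0, 1], [0, 2], [3, 4], [5, 9]])  # example pairs

# sort so that pairs sharing the same second index are contiguous
sorter = np.lexsort(rows_to_fuse.T)
sorted_points = rows_to_fuse[sorter]
uniques, counts = np.unique(sorted_points[:, 1], return_counts=True)

# sum each contiguous group of first-index points in a single call
starts = np.r_[0, counts.cumsum()[:-1]]
group_sums = np.add.reduceat(points[sorted_points[:, 0]], starts, axis=0)

# average each group together with its shared second point
means = (group_sums + points[uniques]) / (counts + 1)[:, None]
```

This replaces the per-group Python iteration with a single segmented reduction, at the cost of one extra sort that the answer above already performs.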

Another improvement would be to compute the means with numba if you really need to speed the code up, but I think the complexity is now close to optimal.
