Vectorizing cluster distances with NumPy

Posted 2024-09-27 21:28:55


I clustered a data set (400k samples, dimension = 205, 200 clusters) with sklearn's KMeans.

For each cluster, I would like to know the maximum distance between the cluster center and the cluster's most distant sample, to get an idea of the cluster's "size". Here is my code:

import numpy as np
import scipy.spatial.distance as spd

diam = np.empty(200)
for i in range(200):
    # max distance from center i to any sample assigned to cluster i
    diam[i] = spd.cdist(seed[np.newaxis, i, 1:], data[data[:, 0] == i][:, 1:]).max()

"seed" holds the cluster centers (200 x 206). The first column of seed contains the number of samples in each cluster (not relevant here).

"data" holds the samples (400k x 206). The first column of data contains the cluster number.
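For context, here is a minimal sketch of how "seed" and "data" in this layout could be assembled from sklearn's KMeans; the names X and km are illustrative, not from the original post:

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(400000, 205)               # 400k samples, 205 features
km = KMeans(n_clusters=200).fit(X)

# data: cluster label in column 0, features in columns 1:
# (hstack promotes column 0 to float; cast with data[:, 0].astype(int)
# before using it as an index, as the answers below do)
data = np.hstack([km.labels_[:, None], X])

# seed: per-cluster sample count in column 0, center coordinates in columns 1:
counts = np.bincount(km.labels_, minlength=200)
seed = np.hstack([counts[:, None], km.cluster_centers_])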

Question: this is done with a loop, which is not very "numpy". Can it be vectorized?


3 Answers

We can be smarter about the indexing and save roughly a factor of 4.

First, let's build some data with the right shapes:

seed = np.random.randint(0, 100, (200, 206))
data = np.random.randint(0, 100, (400000, 206))  # size must be an int; 4e5 raises a TypeError
seed[:, 0] = np.arange(200)
data[:, 0] = np.random.randint(0, 200, 400000)
diam = np.empty(200)

Timing the original answer:

%%timeit
for i in range(200):
    diam[i] = spd.cdist(seed[np.newaxis, i, 1:], data[data[:, 0] == i][:, 1:]).max()

morningsun's answer:

%%timeit
seed_repeated = seed[data[:,0]]
dist_to_center = np.sqrt(np.sum((data[:,1:]-seed_repeated[:,1:])**2, axis=1))
diam = np.zeros(len(seed))
np.maximum.at(diam, data[:,0], dist_to_center)

1 loops, best of 3: 1.33 s per loop

Divakar's answer:

%%timeit
data_sorted = data[data[:, 0].argsort()]
seed_ext = np.repeat(seed,np.bincount(data_sorted[:,0]),axis=0)
dists = np.sqrt(((data_sorted[:,1:] - seed_ext[:,1:])**2).sum(1))
shift_idx = np.append(0,np.nonzero(np.diff(data_sorted[:,0]))[0]+1)
diam_out = np.maximum.reduceat(dists,shift_idx)

1 loops, best of 3: 1.65 s per loop

As we can see, apart from a larger memory footprint, the vectorized solutions don't really gain us anything. To get a real speedup we go back to the original answer, which is fundamentally the right way to do these things, and reduce the amount of indexing: the boolean mask data[:, 0] == i rescans all 400k rows on every one of the 200 iterations, whereas a single argsort lets us slice out each cluster's rows directly:

%%timeit
idx = data[:,0].argsort()
bins = np.bincount(data[:,0])
counter = 0
for i in range(200):
    data_slice = idx[counter: counter+bins[i]]
    diam[i] = spd.cdist(seed[None, i, 1:], data[data_slice, 1:]).max()
    counter += bins[i]

1 loops, best of 3: 281 ms per loop

Double-checking the answers:

np.allclose(diam, diam_out)
True

So much for the assumption that Python loops are bad. They often are, but not in every case.
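To make the argsort/bincount slicing pattern concrete, here is a small standalone illustration (my own toy example, not from the answer):

import numpy as np

labels = np.array([2, 0, 1, 0, 2, 0])
idx = labels.argsort(kind='stable')    # row indices grouped by label
bins = np.bincount(labels)             # rows per label: [3 1 2]

counter = 0
for i in range(len(bins)):
    rows = idx[counter:counter + bins[i]]
    print(i, rows)                     # 0 [1 3 5], 1 [2], 2 [0 4]
    counter += bins[i]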

Very similar to @Divakar's, but without having to sort:

seed_repeated = seed[data[:,0]]
dist_to_center = np.sqrt(np.sum((data[:,1:]-seed_repeated[:,1:])**2, axis=1))

diam = np.zeros(len(seed))
np.maximum.at(diam, data[:,0], dist_to_center)

ufunc.at is known to be slow, though, so it will be interesting to see which one is faster.
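For readers unfamiliar with np.maximum.at, here is a small illustration (my own toy example, not from the answer): it applies an unbuffered elementwise maximum at the given indices, so repeated indices accumulate into a per-group maximum.

import numpy as np

labels = np.array([0, 1, 0, 2, 1, 0])      # cluster id per sample
vals = np.array([3., 7., 5., 1., 2., 4.])  # distance per sample

group_max = np.zeros(3)                    # one slot per cluster; zeros are
                                           # safe since distances are >= 0
np.maximum.at(group_max, labels, vals)
print(group_max)                           # [5. 7. 1.]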

Here's a vectorized approach -

# Sort data w.r.t. col-0
data_sorted = data[data[:, 0].argsort()]

# Get counts of unique tags in col-0 of data and repeat seed accordingly. 
# Thus, we would have an extended version of seed that matches data's shape.
seed_ext = np.repeat(seed,np.bincount(data_sorted[:,0]),axis=0)

# Get euclidean distances between extended seed version and sorted data
dists = np.sqrt(((data_sorted[:,1:] - seed_ext[:,1:])**2).sum(1))

# Get positions of shifts in col-0 of sorted data
shift_idx = np.append(0,np.nonzero(np.diff(data_sorted[:,0]))[0]+1)

# Final piece of puzzle is to get tag based maximum values from dists, 
# where each tag is unique number in col-0 of data
diam_out = np.maximum.reduceat(dists,shift_idx)
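As a toy illustration of the shift_idx/reduceat mechanics on sorted labels (my own example, not from the answer):

import numpy as np

sorted_labels = np.array([0, 0, 0, 1, 1, 2])  # col-0 after sorting
dists = np.array([3., 5., 4., 7., 2., 1.])

# start index of each run of equal labels
shift_idx = np.append(0, np.nonzero(np.diff(sorted_labels))[0] + 1)
print(shift_idx)                              # [0 3 5]
print(np.maximum.reduceat(dists, shift_idx))  # [5. 7. 1.]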

Runtime tests and output verification -

Function definitions:

import numpy as np
import scipy.spatial.distance as spd

def loopy_cdist(seed, data):
    # original loopy approach from the question
    ncluster = seed.shape[0]
    diam = np.empty(ncluster)
    for i in range(ncluster):
        diam[i] = spd.cdist(seed[None, i, 1:], data[data[:, 0] == i][:, 1:]).max()
    return diam

def vectorized_repeat_reduceat(seed, data):
    # this answer: sort, repeat seeds per group, reduceat per group
    data_sorted = data[data[:, 0].argsort()]
    seed_ext = np.repeat(seed, np.bincount(data_sorted[:, 0]), axis=0)
    dists = np.sqrt(((data_sorted[:, 1:] - seed_ext[:, 1:])**2).sum(1))
    shift_idx = np.append(0, np.nonzero(np.diff(data_sorted[:, 0]))[0] + 1)
    return np.maximum.reduceat(dists, shift_idx)

def vectorized_indexing_maxat(seed, data):
    # morningsun's answer: fancy indexing + np.maximum.at
    seed_repeated = seed[data[:, 0]]
    dist_to_center = np.sqrt(np.sum((data[:, 1:] - seed_repeated[:, 1:])**2, axis=1))
    diam = np.zeros(len(seed))
    np.maximum.at(diam, data[:, 0], dist_to_center)
    return diam

Verify outputs:

In [417]: # Inputs
     ...: seed = np.random.rand(20,20)
     ...: data = np.random.randint(0,20,(40000,20))
     ...: 

In [418]: np.allclose(loopy_cdist(seed,data),vectorized_repeat_reduceat(seed,data))
Out[418]: True

In [419]: np.allclose(loopy_cdist(seed,data),vectorized_indexing_maxat(seed,data))
Out[419]: True

Timings:

In [420]: %timeit loopy_cdist(seed,data)
10 loops, best of 3: 35.9 ms per loop

In [421]: %timeit vectorized_repeat_reduceat(seed,data)
10 loops, best of 3: 28.9 ms per loop

In [422]: %timeit vectorized_indexing_maxat(seed,data)
10 loops, best of 3: 24.1 ms per loop
