Fast computation of the Hausdorff distance on a large dataset

Posted 2024-09-28 16:21:24


The dataset has 500,000+ rows. I need the Hausdorff distance between each pair of ids, repeated over the entire dataset.

I have a huge dataset. Here is a small sample:

df = 

id_easy ordinal latitude    longitude            epoch  day_of_week
0   aaa     1.0  22.0701       2.6685   01-01-11 07:45       Friday
1   aaa     2.0  22.0716       2.6695   01-01-11 07:45       Friday
2   aaa     3.0  22.0722       2.6696   01-01-11 07:46       Friday
3   bbb     1.0  22.1166       2.6898   01-01-11 07:58       Friday
4   bbb     2.0  22.1162       2.6951   01-01-11 07:59       Friday
5   ccc     1.0  22.1166       2.6898   01-01-11 07:58       Friday
6   ccc     2.0  22.1162       2.6951   01-01-11 07:59       Friday

I want to compute the Hausdorff distance between two ids.

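A minimal sketch of that computation with SciPy's directed_hausdorff (selecting the latitude/longitude columns of the sample df above is an assumption):

from scipy.spatial.distance import directed_hausdorff

# coordinates of two ids as (n_points, 2) arrays
u = df.loc[df['id_easy'] == 'aaa', ['latitude', 'longitude']].values
v = df.loc[df['id_easy'] == 'bbb', ['latitude', 'longitude']].values

# directed_hausdorff returns (distance, index_u, index_v)
print(directed_hausdorff(u, v)[0])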

The output is 0.05114626086039758.


Now I want to compute this distance over the whole dataset. For all values of id_easy, the expected output is a matrix with 0 on the diagonal (because the distance between aaa and aaa is 0):

       aaa      bbb  ccc
aaa      0  0.05114  ...
bbb    ...        0  ...
ccc    ...      ...    0

3 Answers

First, I define a method that provides some sample data. It would be much easier if you provided something like this in the question. For most performance-related questions, the actual problem size is needed to find an optimal solution.

In the answer below I assume that the average id_easy group size is 17 and that there are 30,000 distinct ids, which gives a dataset of 510,000 rows.

Creating sample data

import numpy as np
import numba as nb

N_ids=30_000
av_id_size=17

# create data (pre-sorting according to id assumed)
lat_lon=np.random.rand(N_ids*av_id_size,2)

# create ids (sorted array of ids)
ids=np.empty(N_ids*av_id_size,dtype=np.int64)
ind=0
for i in range(N_ids):
    for j in range(av_id_size):
        ids[i*av_id_size+j]=ind
    ind+=1
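
(As an aside, the two nested loops above are equivalent to the vectorized ids = np.repeat(np.arange(N_ids), av_id_size).)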

The Hausdorff function

The function below is a slightly modified version of the SciPy implementation. The following modifications were made:

  • For very small input arrays I commented out the shuffling part (enable shuffling on larger arrays, and try out on the real data what works best).
  • At least on Windows, the Anaconda SciPy function appears to have some performance issues (it is much slower than on Linux); the LLVM-based Numba version looks consistent.
  • The indices of the Hausdorff pair were removed from the return value.
  • The distance loop was unrolled for the (N,2) case.

    #Modified Code from Scipy-source
    #https://github.com/scipy/scipy/blob/master/scipy/spatial/_hausdorff.pyx
    #Copyright (C)  Tyler Reddy, Richard Gowers, and Max Linke, 2016
    #Copyright © 2001, 2002 Enthought, Inc.
    #All rights reserved.
    
    #Copyright © 2003-2013 SciPy Developers.
    #All rights reserved.
    
    #Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
    #Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
    #Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following 
    #disclaimer in the documentation and/or other materials provided with the distribution.
    #Neither the name of Enthought nor the names of the SciPy Developers may be used to endorse or promote products derived 
    #from this software without specific prior written permission.
    
    #THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, 
    #BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
    #IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, 
    #OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; 
    #OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 
    #(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
    @nb.njit()
    def directed_hausdorff_nb(ar1, ar2):
        N1 = ar1.shape[0]
        N2 = ar2.shape[0]
        data_dims = ar1.shape[1]
    
        # Shuffling for very small arrays disabled
        # Enable it for larger arrays
        #resort1 = np.arange(N1)
        #resort2 = np.arange(N2)
        #np.random.shuffle(resort1)
        #np.random.shuffle(resort2)
    
        #ar1 = ar1[resort1]
        #ar2 = ar2[resort2]
    
        cmax = 0
        for i in range(N1):
            no_break_occurred = True
            cmin = np.inf
            for j in range(N2):
                # faster performance with square of distance
                # avoid sqrt until very end
                # Simplification (loop unrolling) for (N,2) arrays
                d = (ar1[i, 0] - ar2[j, 0])**2+(ar1[i, 1] - ar2[j, 1])**2
                if d < cmax: # break out of `for j` loop
                    no_break_occurred = False
                    break
    
                if d < cmin: # always true on first iteration of for-j loop
                    cmin = d
    
            # always true on first iteration of the for-j loop; after that only
            # if d >= cmax
            if cmin != np.inf and cmin > cmax and no_break_occurred:
                cmax = cmin
    
        return np.sqrt(cmax)
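
A quick sanity check of the Numba version against SciPy (a hypothetical small example; shuffling does not change the result, only the runtime):

from scipy.spatial.distance import directed_hausdorff

a = np.random.rand(5, 2)
b = np.random.rand(7, 2)
assert np.isclose(directed_hausdorff_nb(a, b), directed_hausdorff(a, b)[0])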
    

Computing the Hausdorff distance on the subsets

@nb.njit(parallel=True)
def get_distance_mat(def_slice,lat_lon):
    Num_ids=def_slice.shape[0]-1
    out=np.empty((Num_ids,Num_ids),dtype=np.float64)
    for i in nb.prange(Num_ids):
        # rows belonging to id i
        ar1 = lat_lon[def_slice[i]:def_slice[i+1], :]
        for j in range(i, Num_ids):
            # rows belonging to id j
            ar2 = lat_lon[def_slice[j]:def_slice[j+1], :]
            dist=directed_hausdorff_nb(ar1, ar2)
            out[i,j]=dist
            out[j,i]=dist
    return out
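
One caveat: the directed Hausdorff distance is not symmetric in general, so mirroring the value into out[j,i] assumes the directed variant is acceptable. For the general (undirected) Hausdorff distance, a small wrapper taking the maximum of both directions could be used inside the loop instead (a sketch):

@nb.njit()
def hausdorff_nb(ar1, ar2):
    # general Hausdorff distance: maximum of the two directed distances
    return max(directed_hausdorff_nb(ar1, ar2),
               directed_hausdorff_nb(ar2, ar1))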

Example and timings

#def_slice defines the start and end of the slices
_,def_slice=np.unique(ids,return_index=True)
def_slice=np.append(def_slice,ids.shape[0])

%timeit res_1=get_distance_mat(def_slice,lat_lon)
#1min 2s ± 301 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
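
To get the labeled matrix asked for in the question, the unique ids returned by np.unique can be kept and used as labels (a sketch, assuming pandas is available):

import pandas as pd

unique_ids, def_slice = np.unique(ids, return_index=True)
def_slice = np.append(def_slice, ids.shape[0])

res = get_distance_mat(def_slice, lat_lon)
res_df = pd.DataFrame(res, index=unique_ids, columns=unique_ids)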

You are talking about calculating 500,000²+ distances. If you calculate 1,000 of these distances every second, it will take you 7.93 years to complete your matrix. I am not sure whether the Hausdorff distance is symmetric, but even if it is, that only saves you a factor of two (3.96 years).

The matrix will also need about 1 TB of memory: 500,000² entries at 4 bytes each (float32) is already 10¹² bytes.

I recommend that you calculate this distance only when it is needed, or, if you really do need the whole matrix, that you parallelize the computation. The good news is that this splits up trivially. For example, with four cores you could partition the problem like this (in pseudocode):

n = len(u) // 2
m = len(v) // 2
A = hausdorff_distance_matrix(u[:n], v[:m])
B = hausdorff_distance_matrix(u[:n], v[m:])
C = hausdorff_distance_matrix(u[n:], v[:m])
D = hausdorff_distance_matrix(u[n:], v[m:])
results = [[A, B],
           [C, D]]

Here hausdorff_distance_matrix(u, v) returns all combinations of distances between u and v. You will probably need to split it into more than four parts, though; one possible concretization is sketched below.
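
A sketch of this splitting with concurrent.futures; the body of hausdorff_distance_matrix below is an assumption about what such a helper could look like, with u and v being lists of (n_points, 2) trajectory arrays:

from concurrent.futures import ProcessPoolExecutor

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance_matrix(u, v):
    # all pairwise (undirected) Hausdorff distances between two lists
    # of trajectories, each an (n_points, 2) array
    out = np.empty((len(u), len(v)))
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            out[i, j] = max(directed_hausdorff(a, b)[0],
                            directed_hausdorff(b, a)[0])
    return out

def hausdorff_blocks(u, v):
    # split into four blocks and compute each on its own core
    n, m = len(u) // 2, len(v) // 2
    pairs = [(u[:n], v[:m]), (u[:n], v[m:]),
             (u[n:], v[:m]), (u[n:], v[m:])]
    with ProcessPoolExecutor(max_workers=4) as ex:
        A, B, C, D = ex.map(hausdorff_distance_matrix,
                            [p[0] for p in pairs],
                            [p[1] for p in pairs])
    return np.block([[A, B], [C, D]])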

What is the application? Can you get away with computing these values only as they are needed?

Try using cdist from SciPy:

import pandas as pd
from scipy.spatial.distance import cdist, directed_hausdorff

coords = df[['latitude', 'longitude']].values

# rows are reshaped to (1, 2) so directed_hausdorff accepts them
hausdorff_distance = cdist(coords, coords,
    lambda u, v: directed_hausdorff(u[None, :], v[None, :])[0])

hausdorff_distance_df = pd.DataFrame(hausdorff_distance)
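
As written, this computes a row-by-row (point-to-point) matrix rather than one entry per id_easy. A variant closer to the question's expected output, grouping by id first (a sketch):

import pandas as pd
from scipy.spatial.distance import directed_hausdorff

# one (n_points, 2) array per id
groups = {k: g[['latitude', 'longitude']].values
          for k, g in df.groupby('id_easy')}
keys = sorted(groups)

mat = pd.DataFrame(
    [[directed_hausdorff(groups[a], groups[b])[0] for b in keys]
     for a in keys],
    index=keys, columns=keys)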

Note, however, that whatever method you end up using will take a lot of time to compute, simply because of the sheer amount of data. Ask yourself whether you really need every single pair of distances.

In practice, this kind of problem is solved by limiting the number of pairs to a manageable amount, for example by splitting the DataFrame into smaller sets, each limited to one geographic area, and then finding the distance pairs within that area (see the sketch after the next paragraph).

Supermarkets use the approach above to determine locations for new stores. They do not calculate the distance between every store they own and every one of their competitors' stores. Instead they first restrict the area to one containing only 5-10 stores in total, and only then start calculating distances.
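
A sketch of that bucketing idea on the question's df (the 0.1-degree rounding granularity is an assumption, and pairs spanning neighboring cells are ignored for simplicity):

from itertools import combinations

from scipy.spatial.distance import directed_hausdorff

# one (n_points, 2) array per id
pts = {k: g[['latitude', 'longitude']].values
       for k, g in df.groupby('id_easy')}

# bucket ids by the rounded centroid of their points
centroids = df.groupby('id_easy')[['latitude', 'longitude']].mean().round(1)

dists = {}
for _, ids_in_cell in centroids.groupby(['latitude', 'longitude']):
    # only compare ids whose centroids fall in the same cell
    for a, b in combinations(ids_in_cell.index, 2):
        dists[(a, b)] = max(directed_hausdorff(pts[a], pts[b])[0],
                            directed_hausdorff(pts[b], pts[a])[0])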
