Removing *almost* duplicate observations

Posted 2024-05-20 14:09:30


I'm trying to remove some observations from a pandas DataFrame where the similarity is almost 100%, but not exactly identical. See the frame below:

[screenshot of the example DataFrame]

Notice that "John", "Mary", and "Wesley" have nearly identical observations, with only one column differing. The real dataset has 15 columns and over 215,000 observations. In every case I could verify by eye the similarity was the same: of the 15 columns, the other observation matched at most 14 of them. For the purposes of the project I decided to remove these duplicate observations (and store them in another DataFrame in case my boss asks to see them).

I obviously thought of drop_duplicates(keep='something'), but that won't work because the observations are not exactly identical. Has anyone run into a problem like this? Is there a remedy?
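To make the problem concrete, here is a minimal sketch (my own two-row frame, modeled on the example above); because the rows differ in one column, a plain drop_duplicates() keeps both:

import pandas as pd

# two rows that agree on everything except Salary
df = pd.DataFrame(
    [
        ["John", 45, 85000, "DC"],
        ["John", 45, 105500, "DC"],
    ],
    columns=["Name", "Age", "Salary", "City"],
)

# no row is an exact copy of another, so nothing is removed
print(df.drop_duplicates())   # still two rows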


3 Answers

This can be framed as computing the pairwise Hamming distance between all records and splitting off the later record of any pair that falls at or below some threshold. Fortunately, numpy/scipy/sklearn have already done the heavy lifting. I've included two functions that produce the same output: one is fully vectorized (but consumes O(N^2) memory), the other consumes O(N) memory but is vectorized only along a single dimension. At your scale you almost certainly do not want the fully vectorized version, since it will likely produce OOM errors. In both cases the basic algorithm is as follows:

  • Encode each field value as an integer (thanks, sklearn!)
  • For every pair of rows, compute the Hamming distance (the number of differing values); a short worked sketch of this step follows the list
  • If a pair of rows is found at or below the threshold Hamming distance, discard the later row, and repeat until no row remains within the threshold of another
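Here is that sketch (mine, with made-up already-encoded values). scipy's pdist returns the normalized Hamming distance, i.e. the fraction of differing columns, so multiplying by the column count recovers the raw count used below:

import numpy as np
from scipy.spatial.distance import pdist, squareform

# three already-encoded rows with 4 columns each
X = np.array([
    [0, 1, 2, 3],
    [0, 1, 2, 4],   # differs from the first row in one column
    [5, 6, 7, 8],   # differs from the first row in every column
])

d = pdist(X, metric="hamming") * X.shape[1]   # unnormalized counts
print(squareform(d))
# [[0. 1. 4.]
#  [1. 0. 4.]
#  [4. 4. 0.]]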

Code:

from sklearn.preprocessing import OrdinalEncoder
import pandas as pd
from scipy.spatial.distance import pdist, squareform
import numpy as np


def dedupe_fully_vectorized(df, threshold=1):
    """
    fully vectorized memory hog version - best not to use for n > 10k
    """
    # convert field data to integers
    enc = OrdinalEncoder()
    X = enc.fit_transform(df.to_numpy())

    # calc the (unnormalized) hamming distance for all row pairs
    d = pdist(X, metric="hamming") * df.shape[1]
    s = squareform(d)

    # s contains all pairs (j,k) and (k,j); exclude all pairs j < k as "duplicates"
    s[np.triu_indices_from(s)] = -1
    dupe_pair_matrix = (0 <= s) * (s <= threshold)

    df_dupes = df[np.any(dupe_pair_matrix, axis=1)]
    df_deduped = df.drop(df_dupes.index).sort_index()
    return (df_deduped, df_dupes)


def dedupe_partially_vectorized(df, threshold=1):
    """
    - Iterate through each row starting from the last; examine all previous rows for duplicates.  
    - If found, it is appended to a list of duplicate indices.
    """
    # convert field data to integers
    enc = OrdinalEncoder()
    X = enc.fit_transform(df.to_numpy())

    """
    - loop through each row, starting from last
    - for each `row`, calculate hamming distance to all previous rows
    - if any such distance is `threshold` or less, mark `idx` as duplicate
    - loop ends at 2nd row (1st is by definition not a duplicate)
    """
    dupe_idx = []
    for j in range(len(X) - 1):
        idx = len(X) - j - 1
        row = X[idx]
        prev_rows = X[0:idx]
        dists = np.sum(row != prev_rows, axis=1)
        if min(dists) <= threshold:
            dupe_idx.append(idx)
    # sort once, after the loop, and drop by label so a non-default index also works
    dupe_idx = sorted(dupe_idx)
    df_dupes = df.iloc[dupe_idx]
    df_deduped = df.drop(df.index[dupe_idx])
    return (df_deduped, df_dupes)
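Before the tests, a rough back-of-the-envelope estimate (my own, not from the answer) of why the fully vectorized version is a memory hog at the question's scale: pdist alone stores N*(N-1)/2 float64 distances, and squareform expands that into a full N x N matrix.

# hypothetical scale taken from the question: ~215,000 rows
n = 215_000
bytes_per_float64 = 8

condensed = n * (n - 1) // 2 * bytes_per_float64   # pdist output
square = n * n * bytes_per_float64                 # squareform output
print(f"pdist: ~{condensed / 1e9:.0f} GB, squareform: ~{square / 1e9:.0f} GB")
# roughly 185 GB and 370 GB -- far more memory than a typical machine has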

Now let's test it. First, a sanity check:

df = pd.DataFrame(
    [
        ["john", "doe", "m", 23],
        ["john", "dupe", "m", 23],
        ["jane", "doe", "f", 29],
        ["jane", "dole", "f", 28],
        ["jon", "dupe", "m", 23],
        ["tom", "donald", "m", 12],
        ["john", "dupe", "m", 65],
    ],
    columns=["first", "last", "s", "age"],
)


(df_deduped_fv, df_dupes_fv) = dedupe_fully_vectorized(df)
(df_deduped, df_dupes) = dedupe_partially_vectorized(df)

df_deduped_fv.equals(df_deduped)  # True

# df_deduped
#   first    last  s  age
# 0  john     doe  m   23
# 2  jane     doe  f   29
# 3  jane    dole  f   28
# 5   tom  donald  m   12

# df_dupes
#   first  last  s  age
# 1  john  dupe  m   23
# 4   jon  dupe  m   23
# 6  john  dupe  m   65

I've tested this on DataFrames of up to 40k rows (generated as shown below) and it seems to work (both methods give the same result), though it can take a few seconds. I haven't tried it at your scale, but it may be slow:

arr = np.array(list("abcdefgh"))
df = pd.DataFrame(np.random.choice(arr, (40000, 15)))
# (df_deduped, df_dupes) = dedupe_partially_vectorized(df)

If you can avoid doing all the pairwise comparisons, for example by grouping on name, performance will improve significantly.
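As a sketch of that idea (mine, using the sanity-check frame's `first` column as the grouping key), the pairwise check can be run per group instead of over the whole frame:

# run the O(n^2) check only within each name group
deduped_parts, dupe_parts = [], []
for _, group in df.groupby("first", sort=False):
    d, dup = dedupe_partially_vectorized(group)
    deduped_parts.append(d)
    dupe_parts.append(dup)

df_deduped = pd.concat(deduped_parts).sort_index()
df_dupes = pd.concat(dupe_parts).sort_index()

The trade-off is that pairs differing in the grouping column itself (such as "john" vs "jon" above) are no longer caught.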

Fun aside / methodological issue

You may notice that you can get interesting "Hamming chains" (I don't know whether that's an actual term), where very different records end up connected by a chain of records that each differ by a single edit:

df_bad_news = pd.DataFrame(
    [
        ["john", "doe", "m", 88],
        ["jon", "doe", "m", 88],
        ["jan", "doe", "m", 88],
        ["jane", "doe", "m", 88],
        ["jane", "doe", "m", 12],
    ],
    columns=["first", "last", "s", "age"],
)


(df_deduped, df_dupes) = dedupe_partially_vectorized(df_bad_news)

# df_deduped
#   first last  s  age
# 0  john  doe  m   88

# df_dupes
#   first last  s  age
# 1   jon  doe  m   88
# 2   jan  doe  m   88
# 3  jane  doe  m   88
# 4  jane  doe  m   12

If there is a field you can group on (the comments mention that the name should be identical), performance will improve considerably. The pairwise computation here is O(n^2) in memory; depending on your needs, you can trade some time efficiency for memory efficiency.

What about a simple loop over subsets of the columns:

import pandas as pd

df = pd.DataFrame(
        [
            ['John', 45, 85000, 'DC'],
            ['Netcha', 25, 48000, 'NYC'],
            ['Mary', 45, 85000, 'DC'],
            ['Wesley', 36, 72500, 'LA'],
            ['Porter', 22, 98750, 'Seattle'],
            ['John', 45, 105500, 'DC'],
            ['Mary', 28, 85000, 'DC'],
            ['Wesley', 36, 72500, 'Boston'],
        ], 
        columns=['Name', 'Age', 'Salary', 'City'])

cols = df.columns.tolist()
cols.remove('Name')

for col in cols:
    # ignore one column at a time and drop rows that match on all the others
    observed_cols = df.drop(col, axis=1).columns.tolist()
    df.drop_duplicates(observed_cols, keep='first', inplace=True)

print(df)

This returns:

     Name  Age  Salary     City
0    John   45   85000       DC
1  Netcha   25   48000      NYC
2    Mary   45   85000       DC
3  Wesley   36   72500       LA
4  Porter   22   98750  Seattle
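
Since the question also wants the removed rows kept for inspection, a variant of the same loop (a sketch, starting again from the original un-deduplicated frame) could collect them before dropping:

df_clean = df.copy()   # assumes df is the original 8-row frame, before the in-place dedupe
dropped_parts = []
for col in cols:
    observed_cols = [c for c in df_clean.columns if c != col]
    is_dupe = df_clean.duplicated(observed_cols, keep='first')
    dropped_parts.append(df_clean[is_dupe])
    df_clean = df_clean[~is_dupe]

df_dupes = pd.concat(dropped_parts)
print(df_clean)   # same five rows as above
print(df_dupes)   # rows 5, 6 and 7 of the original frame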
