Dask count / frequency of terms in a column


I have a very large dataset (>10 million rows). Below is a small 5-row example in which I can get Pandas to count certain given terms in a column whose cells hold lists of terms. On a single-core machine running Pandas everything works fine and I get the expected result (10 rows). However, on this same small 5-row dataset, when I experiment with Dask the computed result has more than 10 rows (the number depends on the number of partitions). The code is below. I would appreciate it if someone could point me in the right direction.

Pandas implementation:

from collections import Counter
from itertools import chain, product

import pandas as pd


def compute_total(df, term_list, cap_list):
    # count every term across all the lists in the 'Terms' column
    terms_counter = Counter(chain.from_iterable(df['Terms']))
    terms_series = pd.Series(terms_counter)
    terms_df = pd.DataFrame({'Term': terms_series.index, 'Count': terms_series.values})
    # keep only the terms of interest
    df1 = terms_df[terms_df['Term'].isin(term_list)]
    # build every (term, capability) pair and attach the counts
    product_terms = product(term_list, cap_list)
    df_cp = pd.DataFrame(product_terms, columns=['Terms', 'Capability'])
    tjt_df = df_cp.set_index('Terms').combine_first(df1.set_index('Term')).reset_index()
    tjt_df.rename(columns={'index': 'Term'}, inplace=True)
    tjt_df['Count'] = tjt_df['Count'].fillna(0.0)  # terms never seen get a count of 0.0
    return tjt_df


d = {'Title': {0: 'IRC do consider this.',
               1: 'we’re simply taking screenshot',
               2: 'Why does irc select topics?',
               3: 'Is this really a screenshot?',
               4: 'how irc is doing this?'},
     'Terms': {0: ['tech', 'channel', 'tech'],
               1: ['channel', 'findwindow', 'Italy', 'findwindow'],
               2: ['Detroit', 'topic', 'seats', 'topic'],
               3: ['tech', 'topic', 'printwindow', 'Boston', 'window'],
               4: ['privmsg', 'wheel', 'privmsg']}}

df = pd.DataFrame.from_dict(d)
term_list = ['channel', 'topic', 'findwindow', 'printwindow', 'privmsg']
cap_list = ['irc', 'screenshot']

Pandas output:

          Term  Capability  Count
0      channel         irc  2.0
1      channel  screenshot  2.0
2   findwindow         irc  2.0
3   findwindow  screenshot  2.0
4  printwindow         irc  1.0
5  printwindow  screenshot  1.0
6      privmsg         irc  2.0
7      privmsg  screenshot  2.0
8        topic         irc  3.0
9        topic  screenshot  3.0

Dask implementation:

Note: for npartitions I tried num_cores = 1 and got the expected result. If I change num_cores to anything greater than 1, I get results I don't understand. For example: with num_cores = 2 the resulting df has 20 rows (OK... I can see where that comes from). With num_cores = 3 or 4 I still get 20 rows. With num_cores = 5 through 16 I get 40 rows! I haven't tried beyond that.

import dask.dataframe as dd
from dask.dataframe.utils import make_meta

num_cores = 8
ddf = dd.from_pandas(df, npartitions=num_cores * 1)
meta = make_meta({'Term': 'U', 'Capability': 'U', 'Count': 'i8'}, index=pd.Index([], 'i8'))
# compute_total is applied to every partition separately
count_df = ddf.map_partitions(compute_total, term_list, cap_list, meta=meta).compute(scheduler='processes')
print(count_df)
print(count_df.shape)

Dask output:

          Term  Capability  Count
0      channel         irc    1.0
1      channel  screenshot    1.0
2   findwindow         irc    0.0
3   findwindow  screenshot    0.0
4  printwindow         irc    0.0
5  printwindow  screenshot    0.0
6      privmsg         irc    0.0
7      privmsg  screenshot    0.0
8        topic         irc    0.0
9        topic  screenshot    0.0
0      channel         irc    1.0
1      channel  screenshot    1.0
2   findwindow         irc    2.0
3   findwindow  screenshot    2.0
4  printwindow         irc    0.0
5  printwindow  screenshot    0.0
6      privmsg         irc    0.0
7      privmsg  screenshot    0.0
8        topic         irc    0.0
9        topic  screenshot    0.0
0      channel         irc    0.0
1      channel  screenshot    0.0
2   findwindow         irc    0.0
3   findwindow  screenshot    0.0
4  printwindow         irc    0.0
5  printwindow  screenshot    0.0
6      privmsg         irc    0.0
7      privmsg  screenshot    0.0
8        topic         irc    2.0
9        topic  screenshot    2.0
0      channel         irc    0.0
1      channel  screenshot    0.0
2   findwindow         irc    0.0
3   findwindow  screenshot    0.0
4  printwindow         irc    1.0
5  printwindow  screenshot    1.0
6      privmsg         irc    2.0
7      privmsg  screenshot    2.0
8        topic         irc    1.0
9        topic  screenshot    1.0
(40, 3)
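
As an aside on where the extra rows come from: map_partitions runs compute_total once per partition, and compute_total always returns the full 10-row term x capability grid, so the concatenated result carries 10 rows for every partition that gets processed (40 rows suggests 4 partitions here). A minimal way to inspect the split, reusing the ddf defined above (map_partitions(len) simply reports the row count of each partition):

# how did the 5 rows get distributed across partitions?
print(ddf.npartitions)                   # number of partitions Dask actually created
print(ddf.map_partitions(len).compute()) # row count per partition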

Observation: after looking at this rather long result dataframe, I figured I could run one final computation on it to get what I want: just group by Term and Capability and sum. That would give me the expected result.

df1 = count_df.groupby(['Term', 'Capability'])['Count'].sum()

However, I'm wondering whether this can be done in a cleaner way with Dask. I understand this is not an "embarrassingly parallel" problem, i.e. one needs a global view of the whole dataset to get the counts, so I have to approach it in a "map -> reduce" fashion, which is what I'm doing now. Is there a cleaner way?
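
For what it's worth, one variant that seems to keep the whole thing inside Dask is to leave the per-partition map step as it is and express the reduce step as a Dask groupby before calling compute, instead of aggregating the concatenated pandas result afterwards. A rough sketch under that assumption, reusing compute_total, ddf, term_list and cap_list from above (the meta here declares Count as 'f8', since fillna(0.0) makes the column float):

from dask.dataframe.utils import make_meta

meta = make_meta({'Term': 'U', 'Capability': 'U', 'Count': 'f8'}, index=pd.Index([], 'i8'))

# map step: per-partition (term, capability) counts
partial = ddf.map_partitions(compute_total, term_list, cap_list, meta=meta)

# reduce step: let Dask sum the per-partition counts before materializing the result
result = (partial.groupby(['Term', 'Capability'])['Count']
                 .sum()
                 .compute(scheduler='processes')
                 .reset_index())
print(result)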

