Python tensor access is very complicated

Posted 2024-10-03 09:12:20


I am building a huge tensor that contains millions of word triples together with their counts. A word triple looks like (word0, link, word1). These triples are collected in a dictionary whose values are their respective counts, e.g. (word0, link, word1): 15. Imagine I have millions of such triples. After counting the occurrences, I try to do some further calculations, and that is where my Python script gets stuck. Here is the part of the code that takes forever (a sketch of the helper it calls follows the snippet):

import pandas as pd
from math import log

big_tuple = covert_to_tuple(big_dict)
pdf = pd.DataFrame.from_records(big_tuple)
pdf.columns = ['word0', 'link', 'word1', 'counts']
total_cnts = pdf.counts.sum()

for _, row in pdf.iterrows():
    w0, link, w1 = row['word0'], row['link'], row['word1']
    w0w1_link = row.counts

    # very slow
    w0_link = pdf[(pdf.word0 == w0) & (pdf.link == link)]['counts'].sum()
    w1_link = pdf[(pdf.word1 == w1) & (pdf.link == link)]['counts'].sum()

    p_w0w1_link = w0w1_link / total_cnts
    p_w0_link = w0_link / total_cnts
    p_w1_link = w1_link / total_cnts
    new_score = log(p_w0w1_link / (p_w0_link * p_w1_link))
    big_dict[(w0, link, w1)] = new_score 
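
For reference, big_dict and covert_to_tuple are not defined in the snippet; a hypothetical minimal sketch of what they might look like (the names and shapes here are assumptions, not code from the original post):

# hypothetical sketch: the real big_dict holds millions of entries
big_dict = {('cat', 'subj', 'sleeps'): 15, ('dog', 'subj', 'barks'): 7}

def covert_to_tuple(d):
    # flatten {(w0, link, w1): count} into (w0, link, w1, count) records
    return [(w0, link, w1, cnt) for (w0, link, w1), cnt in d.items()]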

I profiled my script, and it seems that the two lines below

w0_link = pdf[(pdf.word0 == w0) & (pdf.link == link)]['counts'].sum()  
w1_link = pdf[(pdf.word1 == w1) & (pdf.link == link)]['counts'].sum() 

each take 49% of the computation time. These lines look up the counts for (word0, link) and (word1, link). So, is accessing pdf like this what takes so much time? Is there anything I can do to optimize it?
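
One vectorized option, for comparison with the answer below: the per-row group sums can be computed once with groupby(...).transform('sum') instead of re-filtering the whole frame on every iteration. A minimal sketch, assuming pdf and total_cnts as defined above:

import numpy as np

# group sums aligned back to pdf's rows, computed in one pass each
w0_link = pdf.groupby(['word0', 'link'])['counts'].transform('sum')
w1_link = pdf.groupby(['word1', 'link'])['counts'].transform('sum')

# log(p_w0w1 / (p_w0 * p_w1)) with the total_cnts factors multiplied through
pdf['new_score'] = np.log(pdf['counts'] * total_cnts / (w0_link * w1_link))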


Tags: pdf, link, word, w1, row, total, counts, sum
1 Answer
Community user
#1 · Posted 2024-10-03 09:12:20

Please check my solution - I optimized a few things in the calculation (hopefully without mistakes :)

import pandas as pd
import numpy as np

# sample of data
df = pd.DataFrame({'word0': list('aabb'), 'link': list('llll'),
                   'word1': list('cdcd'), 'counts': [10, 20, 30, 40]})

# caching total count
total_cnt = df['counts'].sum()

# two series with raw count sums for all combinations of
# ('word0', 'link') and ('word1', 'link')
grouped_w0_l = df.groupby(['word0', 'link'])['counts'].sum()
grouped_w1_l = df.groupby(['word1', 'link'])['counts'].sum()

# join sums for grouped ('word0', 'link') to original df
merged_w0 = df.set_index(['word0', 'link']).join(grouped_w0_l, how='left', rsuffix='_w0').reset_index()

# join sums for grouped ('word1', 'link') to merged df
merged_w0_w1 = merged_w0.set_index(['word1', 'link']).join(grouped_w1_l, how='left', rsuffix='_w1').reset_index()

# merged_w0_w1 has enough data to calculate new_score
# check here - I transformed the expression (see the derivation below)
merged_w0_w1['new_score'] = np.log(merged_w0_w1['counts'] * total_cnt / (merged_w0_w1['counts_w0'] * merged_w0_w1['counts_w1']))

# export results to a dict (not sure whether this is really needed - you can keep working with dataframes)
big_dict = merged_w0_w1.set_index(['word0', 'link', 'word1'])['new_score'].to_dict()
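
On the sample data this yields the following scores (values rounded); as a sanity check, the entry for ('a', 'l', 'c') matches the original formula: log(10 * 100 / (30 * 40)) ≈ -0.1823.

print(big_dict)
# {('a', 'l', 'c'): -0.1823, ('a', 'l', 'd'): 0.1054,
#  ('b', 'l', 'c'): 0.0690, ('b', 'l', 'd'): -0.0488}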

The expression for the new score is

new_score = log(p_w0w1_link / (p_w0_link * p_w1_link))
          = log((w0w1_link / total_cnts) / ((w0_link / total_cnts) * (w1_link / total_cnts)))
          = log((w0w1_link / total_cnts) * ((total_cnts * total_cnts) / (w0_link * w1_link)))
          = log(w0w1_link * total_cnts / (w0_link * w1_link))
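
A quick numeric check of this identity (the numbers are arbitrary):

from math import log, isclose

w0w1_link, w0_link, w1_link, total_cnts = 10, 30, 40, 100
original = log((w0w1_link / total_cnts) / ((w0_link / total_cnts) * (w1_link / total_cnts)))
transformed = log(w0w1_link * total_cnts / (w0_link * w1_link))
assert isclose(original, transformed)  # both equal log(5/6), about -0.1823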
