How to create a column that matches on most columns?

Published 2024-09-30 01:22:27


I have a large dataframe (150,000 x 25) of financial transactions. Many (but not all) of these transactions are reversed at a later date. I want to create a new column that identifies when a transaction has been reversed.

I have tried functions such as drop_duplicates() and duplicated() against the category, type and source columns, but haven't been able to narrow down a solution. Any suggestions?

import pandas as pd

d_in = {'key' : ['81371453', '93045710', '22123452', '18233745', '84933451', '95832374', '20283456', '20239485', '95843745'], 
        'date' : ['20200901', '20200901', '20200902', '20200902', '20200902','20200903', '20200904', '20200905', '20200905'],
        'category' : ['Z293', 'B993', 'Z293', 'B993', 'W884', 'C123', 'V332', 'C123', 'V332'], 
        'type' : ['tools', 'supplies', 'tools', 'supplies', 'repairs', 'custom', 'misc', 'custom', 'misc'], 
        'source' : ['Q112', 'E443', 'Q112', 'E443', 'P443', 'B334', 'E449', 'B334', 'E449'], 
        'amount' : [123.21, 3.12, -123.21, -3.12, 9312.00, 312.23, -13.23, -312.23, 13.23]}

df_in = pd.DataFrame(data=d_in)


d_out = {'key' : ['81371453', '93045710', '22123452', '18233745', '84933451', '95832374', '20283456', '20239485', '95843745'], 
    'date' : ['20200901', '20200901', '20200902', '20200902', '20200902','20200903', '20200904', '20200905', '20200905'],
    'category' : ['Z293', 'B993', 'Z293', 'B993', 'W884', 'C123', 'V332', 'C123', 'V332'], 
    'type' : ['tools', 'supplies', 'tools', 'supplies', 'repairs', 'custom', 'misc', 'custom', 'misc'], 
    'source' : ['Q112', 'E443', 'Q112', 'E443', 'P443', 'B334', 'E449', 'B334', 'E449'], 
    'amount' : [123.21, 3.12, -123.21, -3.12, 9312.00, 312.23, -13.23, -312.23, 13.23], 
    'reversed' : ['20200902', '20200902', '20200901', '20200901', 'none', '20200905', '20200905', '20200903', '20200904']}

df_out = pd.DataFrame(data=d_out)
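
For context, the duplicated() attempt mentioned in the question would look roughly like the sketch below (my own illustration, not the asker's actual code). It flags both rows of a reversed pair, but it does not link each row to its counterpart's date, which is why it cannot produce the reversed column on its own.

# flags every row whose (category, type, source) combination appears more than once,
# but gives no way to recover the date of the matching reversal
print(df_in.duplicated(subset=['category', 'type', 'source'], keep=False))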
    

Tags: in, custom, out, tools, misc, pd, c123, supplies
3 Answers

Let's try transform with a reversed list ([::-1]):

# sum the amounts within each (category, type, source) group; a transaction plus its reversal sums to 0
s = df.groupby(['category','type','source'])['amount'].transform('sum')
# reverse the dates within each group so every row picks up its counterpart's date
date = df.groupby(['category','type','source'],sort=False)['date'].transform(lambda x : x.tolist()[::-1])
# only fill 'reversed' where the group's amounts cancel out
df.loc[s.eq(0), 'reversed'] = date
df
Out[95]: 
        key      date category      type source   amount  reversed
0  81371453  20200901     Z293     tools   Q112   123.21  20200902
1  93045710  20200901     B993  supplies   E443     3.12  20200902
2  22123452  20200902     Z293     tools   Q112  -123.21  20200901
3  18233745  20200902     B993  supplies   E443    -3.12  20200901
4  84933451  20200902     W884   repairs   P443  9312.00       NaN
5  95832374  20200903     C123    custom   B334   312.23  20200905
6  20283456  20200904     V332      misc   E449   -13.23  20200905
7  20239485  20200905     C123    custom   B334  -312.23  20200903
8  95843745  20200905     V332      misc   E449    13.23  20200904
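
This works because the answer's df (the question's df_in) groups each transaction with its reversal on category, type and source: the amounts in such a group cancel to zero, and reversing the group's dates swaps the two rows' dates. A small check of that assumption (my own sketch, not part of the answer):

# every group should contain either 2 rows (a transaction and its reversal) or 1 row (never reversed)
print(df_in.groupby(['category', 'type', 'source']).size())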

You can find the maximum and minimum date within each group and set the minimum to the maximum and the maximum to the minimum. If a group only has one date, set it to None.

import numpy as np

gb = df_in.groupby(['category', 'type', 'source'])
# earliest and latest date within each group, broadcast back to every row
maxx = gb['date'].transform(max)
minn = gb['date'].transform(min)
# each row gets the other row's date: the max-dated row gets the min, and vice versa
df_in['reversed'] = np.where(df_in['date'] == maxx, minn, maxx)
# groups with a single row were never reversed, so blank them out
df_in['reversed'] = df_in['reversed'].mask(gb['date'].transform('size') == 1, None)
df_in
Out[1]: 
        key      date category      type source   amount  reversed
0  81371453  20200901     Z293     tools   Q112   123.21  20200902
1  93045710  20200901     B993  supplies   E443     3.12  20200902
2  22123452  20200902     Z293     tools   Q112  -123.21  20200901
3  18233745  20200902     B993  supplies   E443    -3.12  20200901
4  84933451  20200902     W884   repairs   P443  9312.00      None
5  95832374  20200903     C123    custom   B334   312.23  20200905
6  20283456  20200904     V332      misc   E449   -13.23  20200905
7  20239485  20200905     C123    custom   B334  -312.23  20200903
8  95843745  20200905     V332      misc   E449    13.23  20200904
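
If you want the literal string 'none' from the question's desired output rather than None, a small follow-up (my own addition, not part of the answer) is to fill the gaps afterwards:

# replace the None placeholders with the string 'none' to mirror df_out (returns a new frame)
df_in.fillna({'reversed': 'none'})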

@David Erickson has a great answer, but you can make it more robust by also grouping on the absolute value of the amount. This helps when multiple transactions can share the same category, type and source but have different amounts.

# match reversals on the absolute amount as well, so unrelated transactions
# with the same category/type/source but different amounts are not paired up
df_in['amount_abs'] = df_in['amount'].abs()
gb = df_in.groupby(['category', 'type', 'source', 'amount_abs'])
maxx = gb['date'].transform(max)
minn = gb['date'].transform(min)
df_in['reversed'] = np.where(df_in['date'] == maxx, minn, maxx)
df_in['reversed'] = df_in['reversed'].mask(gb['date'].transform('size') == 1, None)
df_in = df_in.drop('amount_abs', axis=1)
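
As a quick sanity check (a sketch, assuming the df_out from the question is still in scope), the computed column can be compared with the desired output once the None placeholders are filled with the string 'none':

# compare the computed frame against the desired output from the question
check = df_in.fillna({'reversed': 'none'})
print(check.equals(df_out))  # expected to print True for this sample data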
