Counting trips and total spend between dates (for a given CustID) from another DataFrame

Posted 2024-09-26 22:52:31


Pandas: select DF rows based on another DF is the closest answer I could find to my question, but I don't believe it fully solves it.

In any case, I'm working with two very large DataFrames (so speed is a consideration), df_emails and df_trips, both already sorted by CustID and then by date.

df_emails contains the dates on which we sent emails to customers, like so:

   CustID   DateSent
0       2 2018-01-20
1       2 2018-02-19
2       2 2018-03-31
3       4 2018-01-10
4       4 2018-02-26
5       5 2018-02-01
6       5 2018-02-07

df_trips contains the dates customers visited the store and how much they spent, like so:

   CustID   TripDate  TotalSpend
0       2 2018-02-04          25
1       2 2018-02-16         100
2       2 2018-02-22         250
3       4 2018-01-03          50
4       4 2018-02-28         100
5       4 2018-03-21         100
6       8 2018-01-07         200
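For anyone who wants to reproduce the problem, the two sample frames above can be rebuilt directly (a minimal sketch; the datetime dtypes are an assumption, matching how the dates are used later):

```python
import pandas as pd

# Rebuild the two sample frames shown above
df_emails = pd.DataFrame({
    "CustID": [2, 2, 2, 4, 4, 5, 5],
    "DateSent": pd.to_datetime([
        "2018-01-20", "2018-02-19", "2018-03-31",
        "2018-01-10", "2018-02-26", "2018-02-01", "2018-02-07",
    ]),
})

df_trips = pd.DataFrame({
    "CustID": [2, 2, 2, 4, 4, 4, 8],
    "TripDate": pd.to_datetime([
        "2018-02-04", "2018-02-16", "2018-02-22",
        "2018-01-03", "2018-02-28", "2018-03-21", "2018-01-07",
    ]),
    "TotalSpend": [25, 100, 250, 50, 100, 100, 200],
})
```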

Basically, what I need to do is find, for each customer, the number of trips and the total spend between each pair of consecutive emails. For the last email sent to a given customer, I need the number of trips and total spend after that email but before the end of the data (2018-04-01). So the final DataFrame would look like this:

   CustID   DateSent NextDateSentOrEndOfData  TripsBetween  TotalSpendBetween
0       2 2018-01-20              2018-02-19           2.0              125.0
1       2 2018-02-19              2018-03-31           1.0              250.0
2       2 2018-03-31              2018-04-01           0.0                0.0
3       4 2018-01-10              2018-02-26           0.0                0.0
4       4 2018-02-26              2018-04-01           2.0              200.0
5       5 2018-02-01              2018-02-07           0.0                0.0
6       5 2018-02-07              2018-04-01           0.0                0.0

Despite my best efforts to implement this in a Python/Pandas-friendly way, the only accurate solution I've managed uses np.where-style masking, shift, and loops. It looks like this:

df_emails["CustNthVisit"] = df_emails.groupby("CustID").cumcount()+1

df_emails["CustTotalVisit"] = df_emails.groupby("CustID")["CustID"].transform('count')

df_emails["NextDateSentOrEndOfData"] = pd.to_datetime(df_emails["DateSent"].shift(-1)).where(df_emails["CustNthVisit"] != df_emails["CustTotalVisit"], pd.to_datetime('2018-04-01'))

for i in df_emails.index:
    df_emails.at[i, "TripsBetween"] = len(df_trips[(df_trips["CustID"] == df_emails.at[i, "CustID"]) & (df_trips["TripDate"] > df_emails.at[i,"DateSent"]) & (df_trips["TripDate"] < df_emails.at[i,"NextDateSentOrEndOfData"])])

for i in df_emails.index:
    df_emails.at[i, "TotalSpendBetween"] = df_trips[(df_trips["CustID"] == df_emails.at[i, "CustID"]) & (df_trips["TripDate"] > df_emails.at[i,"DateSent"]) & (df_trips["TripDate"] < df_emails.at[i,"NextDateSentOrEndOfData"])].TotalSpend.sum()

df_emails.drop(['CustNthVisit',"CustTotalVisit"], axis=1, inplace=True)

However, %%timeit revealed that just the seven rows shown above take 10.6 ms, which makes this solution all but infeasible on my actual dataset of roughly 1,000,000 rows. Does anyone know a faster, workable solution?


2 Answers

Add a next-date column to the emails:

df_emails["NextDateSent"] = df_emails.groupby("CustID")["DateSent"].shift(-1)
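A per-group shift pulls each customer's next DateSent onto the current row and leaves NaT on that customer's last email; a quick check on a cut-down frame (toy data, not the full sample):

```python
import pandas as pd

df = pd.DataFrame({
    "CustID": [2, 2, 5],
    "DateSent": pd.to_datetime(["2018-01-20", "2018-02-19", "2018-02-01"]),
})

# Selecting the column before shifting keeps the result a Series;
# shift(-1) within each CustID group moves the next date up one row.
df["NextDateSent"] = df.groupby("CustID")["DateSent"].shift(-1)
print(df)
```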

Sort, then merge_asof to the nearest preceding email to build a trips lookup table:

df_emails = df_emails.sort_values("DateSent")
df_trips = df_trips.sort_values("TripDate")
df_lookup = pd.merge_asof(df_trips, df_emails, by="CustID", left_on="TripDate",right_on="DateSent", direction="backward")
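On the sample data, this merge tags each trip with the most recent email sent on or before it for that customer; trips with no preceding email (including CustID 8, who never got one) come back with NaT. A self-contained sketch of the step:

```python
import pandas as pd

df_emails = pd.DataFrame({
    "CustID": [2, 2, 2, 4, 4, 5, 5],
    "DateSent": pd.to_datetime(["2018-01-20", "2018-02-19", "2018-03-31",
                                "2018-01-10", "2018-02-26",
                                "2018-02-01", "2018-02-07"]),
})
df_trips = pd.DataFrame({
    "CustID": [2, 2, 2, 4, 4, 4, 8],
    "TripDate": pd.to_datetime(["2018-02-04", "2018-02-16", "2018-02-22",
                                "2018-01-03", "2018-02-28", "2018-03-21",
                                "2018-01-07"]),
    "TotalSpend": [25, 100, 250, 50, 100, 100, 200],
})

# merge_asof requires both frames to be globally sorted on the join keys
df_emails = df_emails.sort_values("DateSent")
df_trips = df_trips.sort_values("TripDate")

# direction="backward": match each trip to the last email at or before it,
# only considering emails for the same CustID (the `by` key)
df_lookup = pd.merge_asof(df_trips, df_emails, by="CustID",
                          left_on="TripDate", right_on="DateSent",
                          direction="backward")
print(df_lookup[["CustID", "TripDate", "DateSent"]])
```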

Aggregate the lookup table for the required data:

df_lookup = df_lookup.loc[:, ["CustID", "DateSent", "TotalSpend"]].groupby(["CustID", "DateSent"]).agg(["count","sum"])
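Passing a list to .agg produces one output column per function under a MultiIndex, so count becomes the number of trips and sum the spend per (CustID, DateSent) key; a toy illustration of just that mechanic:

```python
import pandas as pd

# Three trips matched to two (CustID, DateSent) keys, as in the lookup table
df = pd.DataFrame({
    "CustID":     [2, 2, 2],
    "DateSent":   ["2018-01-20", "2018-01-20", "2018-02-19"],
    "TotalSpend": [25, 100, 250],
})

# Columns become a MultiIndex: ("TotalSpend", "count"), ("TotalSpend", "sum")
out = df.groupby(["CustID", "DateSent"]).agg(["count", "sum"])
print(out)
```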

Left-join it back onto the emails table:

df_merge = df_emails.join(df_lookup, on=["CustID", "DateSent"]).sort_values("CustID")

I chose to leave the NaNs as NaNs because I don't like filling in default values (you can always fill them later, but if you set defaults up front you can't easily distinguish what was there from what wasn't):

   CustID   DateSent NextDateSent  (TotalSpend, count)  (TotalSpend, sum)
0       2 2018-01-20   2018-02-19                  2.0              125.0
1       2 2018-02-19   2018-03-31                  1.0              250.0
2       2 2018-03-31          NaT                  NaN                NaN
3       4 2018-01-10   2018-02-26                  NaN                NaN
4       4 2018-02-26          NaT                  2.0              200.0
5       5 2018-02-01   2018-02-07                  NaN                NaN
6       5 2018-02-07          NaT                  NaN                NaN

If I had been able to get merge_asof to work this would have been a simple case with max_date, so instead I did quite a bit of work:

max_date = pd.to_datetime('2018-04-01')

# set_index for easy extraction by id
df_emails.set_index('CustID', inplace=True)

# we want this later in the final output
df_emails['NextDateSentOrEndOfData'] = df_emails.groupby('CustID').shift(-1).fillna(max_date)

# cuts function for groupby
def cuts(df):
    custID = df.CustID.iloc[0]
    bins=list(df_emails.loc[[custID], 'DateSent']) + [max_date]
    return pd.cut(df.TripDate, bins=bins, right=False)

# bin the dates:
s = df_trips.groupby('CustID', as_index=False, group_keys=False).apply(cuts)

# aggregate the info:
new_df = (df_trips.groupby([df_trips.CustID, s])
                  .TotalSpend.agg(['sum', 'size'])
                  .reset_index()
         )

# get the right limit:
new_df['NextDateSentOrEndOfData'] = new_df.TripDate.apply(lambda x: x.right)

# drop the unnecessary info
new_df.drop('TripDate', axis=1, inplace=True)

# merge:
df_emails.reset_index().merge(new_df, 
                on=['CustID','NextDateSentOrEndOfData'],
                              how='left'
                )

Output:

   CustID   DateSent NextDateSentOrEndOfData    sum  size
0       2 2018-01-20              2018-02-19  125.0   2.0
1       2 2018-02-19              2018-03-31  250.0   1.0
2       2 2018-03-31              2018-04-01    NaN   NaN
3       4 2018-01-10              2018-02-26    NaN   NaN
4       4 2018-02-26              2018-04-01  200.0   2.0
5       5 2018-02-01              2018-02-07    NaN   NaN
6       5 2018-02-07              2018-04-01    NaN   NaN
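The binning step in the answer above relies on pd.cut with right=False producing half-open Intervals [email_i, email_{i+1}), so each Interval's .right edge is the next email date (or max_date for the last one). A minimal standalone illustration using CustID 2's dates:

```python
import pandas as pd

# Bin edges: CustID 2's email dates plus the end-of-data date
bins = pd.to_datetime(["2018-01-20", "2018-02-19", "2018-03-31", "2018-04-01"])
trips = pd.to_datetime(["2018-02-04", "2018-02-16", "2018-02-22"])

# right=False makes each interval closed on the left: [email_i, email_{i+1})
binned = pd.cut(pd.Series(trips), bins=list(bins), right=False)

# Each Interval's .right recovers the next email date / end of data
rights = binned.apply(lambda iv: iv.right)
print(rights.tolist())
```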
