PySpark: how to select the data for each unique ID from a pyspark.sql.dataframe.DataFrame?

Posted 2024-09-30 03:24:45


I am new to the Python language and PySpark. I have a pyspark.sql.dataframe.DataFrame that looks like this:

df.show()
+--------------------+----+----+---------+----------+---------+----------+---------+
|                  ID|Code|bool|      lat|       lon|       v1|        v2|       v3|
+--------------------+----+----+---------+----------+---------+----------+---------+
|5ac52674ffff34c98...|IDFA|   1|42.377167| -71.06994|17.422535|1525319638|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37747|-71.069824|17.683573|1525319639|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37757| -71.06942|22.287935|1525319640|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37761| -71.06943|19.110023|1525319641|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.377243| -71.06952|18.904774|1525319642|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378254| -71.06948|20.772903|1525319643|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37801| -71.06983|18.084948|1525319644|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378693| -71.07033| 15.64326|1525319645|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378723|-71.070335|21.093477|1525319646|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37868| -71.07034|21.851894|1525319647|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.378716| -71.07029|20.583202|1525319648|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37872| -71.07067|19.738768|1525319649|36.853622|
|5ac52674ffff34c98...|IDFA|   1|42.379112| -71.07097|20.480911|1525319650|36.853622|
|5ac52674ffff34c98...|IDFA|   1| 42.37952|  -71.0708|20.526752|1525319651| 44.93808|
|5ac52674ffff34c98...|IDFA|   1| 42.37902| -71.07056|20.534052|1525319652| 44.93808|
|5ac52674ffff34c98...|IDFA|   1|42.380203|  -71.0709|19.921381|1525319653| 44.93808|
|5ac52674ffff34c98...|IDFA|   1| 42.37968|-71.071144| 20.12599|1525319654| 44.93808|
|5ac52674ffff34c98...|IDFA|   1|42.379696| -71.07114|18.760069|1525319655| 36.77853|
|5ac52674ffff34c98...|IDFA|   1| 42.38011| -71.07123|19.155525|1525319656| 36.77853|
|5ac52674ffff34c98...|IDFA|   1| 42.38022|  -71.0712|16.978994|1525319657| 36.77853|
+--------------------+----+----+---------+----------+---------+----------+---------+
only showing top 20 rows

I want to extract each unique user's information in a loop and convert it into a dataframe.

For the first user, this is what I am trying:

from pyspark.sql import functions as fs  # assuming this is the import behind the fs alias below

id0 = df.first().ID
tmpDF = df.filter(fs.col('ID') == id0)

This works, but converting the result to a pandas dataframe takes a very long time:

tmpDF = tmpDF.toPandas()

Tags: data, user, language, id, dataframe, df, sql, show
2 Answers

Here is what you are looking for: df.select("ID").distinct().rdd.flatMap(lambda x: x).collect() gives you a list of the unique IDs, which you can use to filter your Spark dataframe, and toPandas() can then convert each filtered Spark dataframe into a pandas dataframe.

# Loop over the distinct IDs and build one pandas dataframe per user
for i in df.select("ID").distinct().rdd.flatMap(lambda x: x).collect():
    tmp_df = df.filter(df.ID == i)
    user_pd_df = tmp_df.toPandas()
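
If this loop is slow because every iteration launches a separate Spark job plus a toPandas() collect, a minimal alternative sketch, assuming the whole dataframe fits in driver memory, is to collect once and split per user with pandas (full_pdf and per_user are illustrative names):

# Sketch: collect once, then split per user on the driver with pandas.
# Only viable when the full dataframe fits in driver memory.
full_pdf = df.toPandas()
per_user = {uid: grp for uid, grp in full_pdf.groupby('ID')}  # maps each ID to its pandas dataframe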

Update (since the question has been edited):

toPandas() collects every record of the dataframe onto the driver and should only be done on a small subset of the data. If you try to convert a huge dataframe to pandas, it will take a very long time.

You can use toPandas() to convert the Spark df to pandas:

unique_df = df.select('ID').distinct()

unique_pandas_df = unique_df.toPandas()
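
As a hedged usage sketch (the variable names are illustrative), the collected unique IDs can then drive per-user work, pulling only one user's rows to the driver at a time:

for uid in unique_pandas_df['ID']:
    user_pdf = df.filter(df.ID == uid).toPandas()  # collect only this user's rows
    # ... process user_pdf here ...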
