Is there a simpler way to do this in Spark?
Use case: I have a DataFrame with 1 million rows, and I want to process the rows as JSON, 5 at a time, without losing parallelism.
Example DataFrame (df):
+-------------+---------+
| col_a | col_b |
+-------------+---------+
| row1a | row1b |
| row2a | row2b |
| row3a | row3b |
| row4a | row4b |
| row5a | row5b |
| row6a | row6b |
| row7a | row7b |
| .. | .. |
+-------------+---------+
Current working solution
Using zipWithIndex:
row_id_df = df.rdd.map(lambda x: json.dumps(x.asDict())).zipWithIndex().toDF(["item", "id"])
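Each Row is first turned into a dict with asDict() and then serialized with json.dumps; zipWithIndex then pairs each JSON string with its 0-based position in the RDD. A minimal sketch of the per-row serialization (the dict literal stands in for Row.asDict()):

```python
import json

# Stand-in for one Row of df after .asDict().
row_dict = {"col_a": "row1a", "col_b": "row1b"}

# This is what each "item" in row_id_df looks like.
item = json.dumps(row_dict)
print(item)  # {"col_a": "row1a", "col_b": "row1b"}
```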
The line above converts the DataFrame into the following DataFrame (row_id_df):
+--------------------------------------+--------+
| item | id |
+--------------------------------------+--------+
| {"col_a": "row1a", "col_b": "row1b"} | 0 |
| {"col_a": "row2a", "col_b": "row2b"} | 1 |
| {"col_a": "row3a", "col_b": "row3b"} | 2 |
| {"col_a": "row4a", "col_b": "row4b"} | 3 |
| {"col_a": "row5a", "col_b": "row5b"} | 4 |
| {"col_a": "row6a", "col_b": "row6b"} | 5 |
| {"col_a": "row7a", "col_b": "row7b"} | 6 |
| .. | .. |
+--------------------------------------+--------+
At this point every row has an id. Now I use a groupBy expression to group every 5 items into one group:
from pyspark.sql.functions import col, collect_list, floor, lit
from pyspark.sql.types import IntegerType

splitBy = (floor(col("id") / lit(5)) * lit(5)) \
    .cast(IntegerType()).alias("id")

row_id_df.groupBy(splitBy) \
    .agg(collect_list(col("item"))) \
    .select(col("collect_list(item)").alias("items")) \
    .rdd.foreach(process_each_5)
def process_each_5(row):
    print(len(row.items))  # 5
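To see what the grouping key does, here is a pure-Python sketch of the same arithmetic: floor(id / 5) * 5 maps every id to the first id of its 5-row bucket, so ids 0-4 land in bucket 0, ids 5-9 in bucket 5, and so on.

```python
def bucket(row_id, size=5):
    # Mirrors the Spark expression floor(col("id") / lit(5)) * lit(5).
    return (row_id // size) * size

print([bucket(i) for i in range(7)])  # [0, 0, 0, 0, 0, 5, 5]
```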
This works, and it works well. However, I feel there must be a simpler way.
In short, I need to transform the data as described above:
From:
+-------------+---------+
| col_a | col_b |
+-------------+---------+
| row1a | row1b |
| row2a | row2b |
| row3a | row3b |
| row4a | row4b |
| row5a | row5b |
| row6a | row6b |
| row7a | row7b |
| .. | .. |
+-------------+---------+
To:
+-------------------------------------------+
| items |
+-------------------------------------------+
| [{"col_a": "row1a", "col_b": "row1b"}, |
| {"col_a": "row2a", "col_b": "row2b"}, |
| {"col_a": "row3a", "col_b": "row3b"}, |
| {"col_a": "row4a", "col_b": "row4b"}, |
| {"col_a": "row5a", "col_b": "row5b"}] |
| [{"col_a": "row6a", "col_b": "row6b"}, |
| {"col_a": "row7a", "col_b": "row7b"},...]|
| .. |
+-------------------------------------------+
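Spark aside, the from/to transformation above can be simulated in plain Python, which makes the shape of the result easy to check (a sketch only: enumerate plays the role of zipWithIndex, and a dict keyed by (idx // 5) * 5 plays the role of the groupBy):

```python
import json

# Toy rows standing in for the DataFrame contents.
rows = [{"col_a": f"row{i}a", "col_b": f"row{i}b"} for i in range(1, 8)]

# enumerate ~ zipWithIndex; (idx // 5) * 5 ~ the floor(id / 5) * 5 key.
buckets = {}
for idx, row in enumerate(rows):
    buckets.setdefault((idx // 5) * 5, []).append(json.dumps(row))

items = [buckets[key] for key in sorted(buckets)]
print([len(group) for group in items])  # [5, 2]
```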
PS: I don't want to use df.collect().