PySpark: joining DataFrame columns based on array_contains

Posted on 2024-09-30 01:34:22


I have two DataFrames:

sdf1 = spark.createDataFrame([
    ("123", "A", [1, 2, 3]),
    ("123","B", [4, 5]),
    ("456","C", [1, 2]),
    ("456","D", [3, 4, 5]),
], ["id1", "name", "resources"])

sdf2 = spark.createDataFrame([
    ("123", 1, "R1"),
    ("123", 2, "R2"),
    ("123", 3, "R3"),
    ("123", 4, "R4"),
    ("123", 5, "R5"),
    ("456", 1, "R1"),
    ("456", 2, "R2"),
    ("456", 3, "R7"),
    ("456", 4, "R8"),
    ("456", 5, "R9")
], ["id2", "resource_id", "name"])

Expected result:

+----+-----+-----------+-------------+
|id1 |name |resources  |New Column   |
+----+-----+-----------+-------------+
|123 |A    |[1, 2, 3]  |[R1, R2, R3] |
|123 |B    |[4, 5]     |[R4, R5]     |
|456 |C    |[1, 2]     |[R1, R2]     |
|456 |D    |[3, 4, 5]  |[R7, R8, R9] |
+----+-----+-----------+-------------+

I tried this:

res_sdf = sdf1.join(sdf2, on=[(sdf1.id1 == sdf2.id2) & array_contains(sdf1.resources, sdf2.resource_id)], how='left')

But I get an error: TypeError: Column is not iterable

What is the correct way to do this?

Thanks


1 Answer
Posted on 2024-09-30 01:34:22

Try the following code:

    from pyspark.sql.functions import udf, collect_list
    from pyspark.sql.types import BooleanType

    # boolean UDF: is this resource_id present in the resources array?
    contains_udf = udf(lambda rid, res: rid in res, BooleanType())

    res_sdf = sdf1.join(sdf2, on=sdf1.id1 == sdf2.id2, how='left') \
        .filter(contains_udf(sdf2.resource_id, sdf1.resources))
    res_sdf = res_sdf.groupBy(sdf1.id1, sdf1.name, 'resources') \
        .agg(collect_list(sdf2.name).alias('New Column')).orderBy('id1')
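
For what it's worth, the Python UDF can usually be avoided entirely by moving the containment test into a SQL expression: Spark SQL's array_contains function accepts a column as its second argument even where the Python helper rejects one. Below is a minimal UDF-free sketch against the DataFrames above, assuming the schemas as shown; treat it as a starting point rather than the one canonical answer:

    from pyspark.sql import functions as F

    # join on id equality plus a SQL-level array_contains test;
    # expr() parses the condition on the JVM side, so no Python UDF is needed
    res_sdf = (
        sdf1.join(
            sdf2,
            on=(sdf1.id1 == sdf2.id2) & F.expr("array_contains(resources, resource_id)"),
            how='left',
        )
        # both inputs have a "name" column, so qualify via the parent DataFrames
        .groupBy(sdf1.id1, sdf1.name, 'resources')
        .agg(F.collect_list(sdf2.name).alias('New Column'))
        .orderBy('id1')
    )
    res_sdf.show(truncate=False)

Two caveats apply either way: collect_list gives no ordering guarantee, so wrap the result in sort_array if the element order matters, and a non-equi join condition like this can be planned as a broadcast nested loop join; on large inputs it is often cheaper to explode(resources) first and run a plain equi-join on the id and resource id columns.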

