Filter nested columns in a PySpark DataFrame

Posted 2024-09-27 00:23:15


I know there are a lot of similar questions, but I haven't found any that exactly match my scenario, so please don't be trigger-happy with the duplicate flag. I'm working in a Python 3 notebook in Azure Databricks with Spark 3.0.1.

I have the following DataFrame:

+---+---------+--------+
|ID |FirstName|LastName|
+---+---------+--------+
|1  |John     |Doe     |
|2  |Michael  |        |
|3  |Angela   |Merkel  |
+---+---------+--------+

which can be created with this code:

from pyspark.sql.types import StructType, StructField, StringType, IntegerType
import pyspark.sql.functions as F

data2 = [
    (1, "John", "Doe"),
    (2, "Michael", ""),
    (3, "Angela", "Merkel"),
]

schema = StructType([
    StructField("ID", IntegerType(), True),
    StructField("FirstName", StringType(), True),
    StructField("LastName", StringType(), True),
])

df1 = spark.createDataFrame(data=data2, schema=schema)
df1.printSchema()
df1.show(truncate=False)

I transform it into this DataFrame:

+---+-----------------------------------------+
|ID |Names                                    |
+---+-----------------------------------------+
|1  |[[FirstName, John], [LastName, Doe]]     |
|2  |[[FirstName, Michael], [LastName, ]]     |
|3  |[[FirstName, Angela], [LastName, Merkel]]|
+---+-----------------------------------------+

using this code:

df2 = df1.select(
            'ID', 
            F.array(
                F.struct(
                    F.lit('FirstName').alias('NameType'), 
                    F.col('FirstName').alias('Name')
                ), 
                F.struct(
                    F.lit('LastName').alias('NameType'), 
                    F.col('LastName').alias('Name')
                )
            ).alias('Names')
        )

df2.printSchema()
df2.show(truncate=False)
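
For reference, df2.printSchema() should show Names as an array of name structs, something like this (the exact nullable flags may differ):

root
 |-- ID: integer (nullable = true)
 |-- Names: array (nullable = false)
 |    |-- element: struct (containsNull = false)
 |    |    |-- NameType: string (nullable = false)
 |    |    |-- Name: string (nullable = true)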

Now I'm trying to filter the Names array so that entries whose LastName is null or an empty string are removed. My overall goal is an object that can be serialized to JSON, in which Names entries with an empty Name value are excluded.

Like this:

[
    {
        "ID": 1,
        "Names": [
            {
                "NameType": "FirstName",
                "Name": "John"
            },
            {
                "NameType": "LastName",
                "Name": "Doe"
            }
        ]
    },
    {
        "ID": 2,
        "Names": [
            {
                "NameType": "FirstName",
                "Name": "Michael"
            }
        ]
    },
    {
        "ID": 3,
        "Names": [
            {
                "NameType": "FirstName",
                "Name": "Angela"
            },
            {
                "NameType": "LastName",
                "Name": "Merkel"
            }
        ]
    }
]

I tried

df2 = df1.select(
            'ID', 
            F.array(
                F.struct(
                    F.lit('FirstName').alias('NameType'), 
                    F.col('FirstName').alias('Name')
                ), 
                F.struct(
                    F.lit('LastName').alias('NameType'), 
                    F.col('LastName').alias('Name')
                )
            ).filter(lambda x: x.col('LastName').isNotNull()).alias('Names')
        )

but I get the error 'Column' object is not callable.

I also tried df2 = df2.filter(F.col('Names')['LastName']) > 0), but that gives me an invalid syntax error.

Then I tried

df2 = df2.filter(lambda x: (len(x)>0), F.col('Names')['LastName'])

but that produces the error TypeError: filter() takes 2 positional arguments but 3 were given.

Can anyone tell me how to get this working?

1 Answer

Posted 2024-09-27 00:23:15

You can use the higher-order function filter:

import pyspark.sql.functions as F

df3 = df2.withColumn(
    'Names', 
    F.expr("filter(Names, x -> case when x.NameType = 'LastName' and length(x.Name) = 0 then false else true end)")
)

df3.show(truncate=False)
+---+-----------------------------------------+
|ID |Names                                    |
+---+-----------------------------------------+
|1  |[[FirstName, John], [LastName, Doe]]     |
|2  |[[FirstName, Michael]]                   |
|3  |[[FirstName, Angela], [LastName, Merkel]]|
+---+-----------------------------------------+
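
Two follow-up notes. First, length(NULL) evaluates to NULL, so the case expression above keeps entries whose Name is null; if you also need to drop those, make the predicate explicit, e.g. x.Name is not null and length(x.Name) > 0. Second, on Spark 3.1+ the same higher-order function is exposed directly as pyspark.sql.functions.filter, so the predicate can be a Python lambda; a minimal sketch (the question's Spark 3.0.1 still needs the expr form above):

import pyspark.sql.functions as F

# Spark 3.1+: F.filter accepts a Python lambda over the array elements.
# Keep an entry only when its Name is non-null and non-empty.
df3 = df2.withColumn(
    'Names',
    F.filter('Names', lambda x: x['Name'].isNotNull() & (F.length(x['Name']) > 0))
)

# To produce the JSON the question asks for, toJSON() serializes
# each row of the filtered DataFrame as a JSON string.
for row in df3.toJSON().collect():
    print(row)

Each printed line should look like {"ID":1,"Names":[{"NameType":"FirstName","Name":"John"},...]}, matching the target structure.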
