How do I slice up to the last item to form a new column?

Posted 2024-07-05 10:29:03


I have a DataFrame in the following format:

+-------------------------------------------------------------------------------------------------+
|value                                                                                            |
+-------------------------------------------------------------------------------------------------+
|datalake-performance/raw/bamboohr/bamboohr_custom_turnover_data/2020/12/10/11:15.csv             |
|datalake-performance/raw/gitlab/002429d9-908c-497b-96ba-67794b31f0cd                             |
|datalake-performance/processed/bamboohr/employee/04-08-2020/16:23.csv                            |
|datalake-performance/raw/zoom/user/year=2020/month=09/day=22/a329affc-b1f5-45d1-932a-fbb13d9873d6|
+-------------------------------------------------------------------------------------------------+
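For reference, a minimal sketch to reproduce this DataFrame (assuming an active SparkSession named spark; the rows and the column name value are taken from the table above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample rows copied from the table above, one string column named "value"
data = [
    ("datalake-performance/raw/bamboohr/bamboohr_custom_turnover_data/2020/12/10/11:15.csv",),
    ("datalake-performance/raw/gitlab/002429d9-908c-497b-96ba-67794b31f0cd",),
    ("datalake-performance/processed/bamboohr/employee/04-08-2020/16:23.csv",),
    ("datalake-performance/raw/zoom/user/year=2020/month=09/day=22/a329affc-b1f5-45d1-932a-fbb13d9873d6",),
]
df = spark.createDataFrame(data, ["value"])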

I want to generate a new column in the following format:

newcol
[bamboohr_custom_turnover_data, 2020, 12, 10]
[]
[employee, 04-08-2020]
[user, year=2020, month=09, day=22]

In pandas, this would look like:

df['value'].str.split('/').str[3:-1]  

I have tried the following with PySpark, but I get an error:

df = df.withColumn("list", (split(col("value"), "/")))    
df.select(slice(df["list"], 3, size(df["list"]) - (3 + 1)))

TypeError: Column is not iterable

How do I get the equivalent of the [3:-1] slice in PySpark?


3 Answers

You can use the Spark SQL functions slice and size to do the slicing. Note that Spark SQL array indices start at 1, not 0:

df2 = df.selectExpr("slice(split(value, '/'), 4, size(split(value, '/')) - 4) newcol")

df2.show(truncate=False)
+---------------------------------------------+
|newcol                                       |
+---------------------------------------------+
|[bamboohr_custom_turnover_data, 2020, 12, 10]|
|[]                                           |
|[employee, 04-08-2020]                       |
|[user, year=2020, month=09, day=22]          |
+---------------------------------------------+
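If you want to keep the other columns, the same SQL expression can be used inside withColumn via expr (a small sketch reusing the expression above; the output column name newcol is just an example):

from pyspark.sql.functions import expr

# Same slice/size expression as above, added as a new column instead of a projection
df2 = df.withColumn(
    "newcol",
    expr("slice(split(value, '/'), 4, size(split(value, '/')) - 4)")
)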

You can try something like this, using a Python UDF so the familiar [3:-1] slice can be applied to the split array:

import pyspark.sql.functions as F
from pyspark.sql.types import ArrayType, StringType

# Apply Python's [3:-1] slicing to the split array via a UDF
slice_udf = F.udf(lambda parts: parts[3:-1], ArrayType(StringType()))
df_updated = df.withColumn("new value", slice_udf(F.split(df.value, "/")))

Additional reference: here

The slice function can also take a negative start index to count back from the end. You want 4 parts while ignoring the last one, so start at -5 and take 4. Note from the output below that rows with a different number of path segments (like the third row) will pick up extra leading parts:

from pyspark.sql.functions import col, split, slice

df = df.withColumn("newcol", slice(split(col("value"), "/"), -5, 4)) 
df.select("newcol").show(truncate=False)

#+---------------------------------------------+
#|newcol                                       |
#+---------------------------------------------+
#|[bamboohr_custom_turnover_data, 2020, 12, 10]|
#|[]                                           |
#|[processed, bamboohr, employee, 04-08-2020]  |
#|[user, year=2020, month=09, day=22]          |
#+---------------------------------------------+
