Check whether a string in one dataframe contains a substring from another dataframe

Posted 2024-05-01 23:09:22


I have 2 dataframes:

df_1 has a column id_normalized that contains only letters and digits (the normalized form), plus a column id_no_normalized. Example:

id_normalized | id_no_normalized
--------------|-----------------
ABC           | A_B.C
ERFD          | E.R_FD
12ZED         | 12_Z.ED

df_2 has a column name that contains only letters and digits. Example:

name
---------------------------
googleisa12ZEDgoodnavigator
internetABCexplorer

I want to check whether id_normalized (from df_1) appears inside name (in df_2). If it does, I take the corresponding id_no_normalized value and store it in a new column of df_2.

Expected result:

name                        | result
----------------------------|--------
googleisa12ZEDgoodnavigator | 12_Z.ED
internetABCexplorer         | A_B.C
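In plain Python terms, the lookup I am after could be sketched like this (illustrative only; `lookup` is a hypothetical helper, not part of either DataFrame):

```python
# Plain-Python sketch of the desired lookup, using the sample rows above.
df_1_rows = [
    {"id_normalized": "ABC", "id_no_normalized": "A_B.C"},
    {"id_normalized": "ERFD", "id_no_normalized": "E.R_FD"},
    {"id_normalized": "12ZED", "id_no_normalized": "12_Z.ED"},
]
df_2_rows = [
    {"name": "googleisa12ZEDgoodnavigator"},
    {"name": "internetABCexplorer"},
]

def lookup(name, rows):
    """Return id_no_normalized of the first id_normalized contained in name."""
    for row in rows:
        if row["id_normalized"] in name:
            return row["id_no_normalized"]
    return None

results = [{"name": r["name"], "result": lookup(r["name"], df_1_rows)}
           for r in df_2_rows]
```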

I tried this code:

df_result = df_2.withColumn("id_no_normalized", df_2.name.contains(df_1.id_normalized))
return df_result.select("name", "id_normalized")

It does not work, because id_normalized cannot be found in df_2 (the condition references a column from a different DataFrame). Any ideas?

My second attempt works only when I limit the output to roughly 300 rows; when I return all the data, it runs for a very long time and never finishes:

from pyspark.sql import functions as F

df_1 = df_1.select("id_no_normalized").drop_duplicates()
df_1 = df_1.withColumn(
    "id_normalized",
    F.regexp_replace(F.col("id_no_normalized"), "[^a-zA-Z0-9]+", ""))
df_2 = df_2.select("name")
extract = F.expr('position(id_normalized IN name) > 0')
result = df_1.join(df_2, extract)
return result
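For reference, the `regexp_replace` pattern above strips every non-alphanumeric character. The same normalization in plain Python (using the standard `re` module) looks like:

```python
import re

def normalize(id_no_normalized):
    # Equivalent of F.regexp_replace(col, "[^a-zA-Z0-9]+", ""):
    # remove every run of characters that are not letters or digits.
    return re.sub(r"[^a-zA-Z0-9]+", "", id_no_normalized)

print(normalize("A_B.C"))    # -> "ABC"
print(normalize("12_Z.ED"))  # -> "12ZED"
```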

How can I correct the code to solve this problem? Thanks.


1 Answer

We can solve this with a cross join and a UDF applied to the joined DataFrame, but we need to make sure it still performs acceptably on a large dataset.

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

data1 = [
 {"id_normalized":"ABC","id_no_normalized":"A_B.C"},
 {"id_normalized":"ERFD","id_no_normalized":"E.R_FD"},
 {"id_normalized":"12ZED","id_no_normalized":"12_Z.ED"}
]

data2 = [
 {"name": "googleisa12ZEDgoodnavigator"},
 {"name": "internetABCexplorer"}
]

df1 = spark.createDataFrame(data1, ["id_no_normalized", "id_normalized"])
df2 = spark.createDataFrame(data2, ["name"])

df3 = df1.crossJoin(df2)
search_for_udf = udf(lambda name,id_normalized: name.find(id_normalized), returnType=IntegerType())
df4 = df3.withColumn("contain", search_for_udf(df3["name"], df3["id_normalized"]))
df4.filter(df4["contain"] > -1).show()


>>> df4.filter(df4["contain"] > -1).show()
+----------------+-------------+--------------------+-------+
|id_no_normalized|id_normalized|                name|contain|
+----------------+-------------+--------------------+-------+
|           A_B.C|          ABC| internetABCexplorer|      8|
|         12_Z.ED|        12ZED|googleisa12ZEDgoo...|      9|
+----------------+-------------+--------------------+-------+
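A note on the filter: the UDF wraps Python's `str.find`, which returns the character index of the first match, or -1 when the substring is absent. That is why `contain > -1` keeps exactly the matching rows, and why the indices 8 and 9 appear above:

```python
# str.find returns the index of the first occurrence, or -1 if absent.
name = "internetABCexplorer"
hit = name.find("ABC")     # "ABC" starts right after "internet" (8 chars)
miss = name.find("ERFD")   # "ERFD" does not occur in this name
print(hit, miss)           # -> 8 -1
```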

I believe there are some Spark techniques (for example, broadcasting the smaller DataFrame) to make the cross join more efficient.
