PySpark: search for keywords with a regular expression, then join with another DataFrame

Posted 2024-06-01 20:03:37


I have two DataFrames.

DataFrame A:

name       groceries 
Mike       apple, orange, banana, noodle, red wine
Kate       white wine, green beans, extra pineapple hawaiian pizza
Leah       red wine, juice, rice, grapes, green beans
Ben        water, spaghetti

DataFrame B:

id       item
0001     red wine
0002     green beans

I loop over the rows of B and, for each item, use a regular expression to check whether that item is present in the groceries column of DataFrame A:

import re
from pyspark.sql import functions as F

df = None
# Collect the items from B to the driver, then filter A once per keyword
for keyword in B.select('item').rdd.flatMap(lambda x: x).collect():
    if keyword is None:
        continue
    # Build one lookahead per word so the words can appear in any order
    pattern = '(?i)^'
    start = '(?=.*\\b'
    end = '\\b)'
    for word in re.split('\\s+', keyword):
        pattern = pattern + start + word + end
    pattern = pattern + '.*$'

    if df is None:
        df = A.filter(A['groceries'].rlike(pattern)).withColumn('item', F.lit(keyword))
    else:
        df = df.union(A.filter(A['groceries'].rlike(pattern)).withColumn('item', F.lit(keyword)))
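For reference, here is what the loop builds for a given keyword. This is a hypothetical helper, not from the original post, that just packages the same pattern-building logic so it can be inspected:

def build_pattern(keyword):
    # One lookahead per word: every word must occur somewhere, in any order
    parts = ''.join(r'(?=.*\b' + word + r'\b)' for word in re.split(r'\s+', keyword))
    return '(?i)^' + parts + '.*$'

print(build_pattern('red wine'))
# (?i)^(?=.*\bred\b)(?=.*\bwine\b).*$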

The output I want is the rows of A that contain an item from B, with the matching keyword added as a new item column:

name       groceries                                                     item
Mike       apple, orange, banana, noodle, red wine                       red wine
Leah       red wine, juice, rice, grapes, green beans                    red wine
Kate       white wine, green beans, extra pineapple hawaiian pizza       green beans
Leah       red wine, juice, rice, grapes, green beans                    green beans

The actual output is not what I want, and I don't understand what is wrong with this approach.

I would also like to know whether there is a way to join A and B directly with rlike, so that a row is joined only when an item from B is present in the groceries of A. Thanks.

A more complicated dataset:

test1 = spark.createDataFrame(
    [("Mike", "apple, oranges, red wine"),
     ("Kate", "Whitewine, green beans waterrr, pineapple, red wine"),
     ("Leah", "red wine, juice, rice, grapes, green beans"),
     ("Ben", "Water,Spaghetti, the little prince 70th anniversary gift set (book/cd/downloadable audio)")],
    schema=["name", "groceries"])
test2 = spark.createDataFrame(
    [("001", "red wine"),
     ("002", "green beans waterrr"),
     ("003", "the little prince 70th anniversary gift set (book/cd/downloadable audio)")],
    schema=["id", "item"])
#%%
test_join = test1.join(test2, F.expr("groceries rlike item"), how='inner')
test_join.show(truncate=False)
+----+---------------------------------------------------+---+-------------------+
|name|groceries                                          |id |item               |
+----+---------------------------------------------------+---+-------------------+
|Mike|apple, oranges, red wine                           |001|red wine           |
|Kate|Whitewine, green beans waterrr, pineapple, red wine|001|red wine           |
|Kate|Whitewine, green beans waterrr, pineapple, red wine|002|green beans waterrr|
|Leah|red wine, juice, rice, grapes, green beans         |001|red wine           |
+----+---------------------------------------------------+---+-------------------+

Even though the item "the little prince 70th anniversary gift set (book/cd/downloadable audio)" appears verbatim in Ben's groceries, the join does not return it as a match.

------- If I do a regex search for "red apple" like the one below -------

test1 = spark.createDataFrame(
    [("Mike", "apple, oranges, red wine"),
     ("Kate", "Whitewine, green beans waterrr, pineapple, red wine"),
     ("Leah", "red wine, juice, rice, grapes, green beans"),
     ("Ben", "Water,Spaghetti, the little prince 70th anniversary gift set (book/cd/downloadable audio)")],
    schema=["name", "groceries"])
test2 = spark.createDataFrame(
    [("001", "red apple"),
     ("002", "green beans waterrr"),
     ("003", "the little prince 70th anniversary gift set (book/cd/downloadable audio)")],
    schema=["id", "item"])

test_join = test1.filter(test1['groceries'].rlike('(?i)^(?=.*\\bred\\b)(?=.*\\bapple\\b).*$'))
test_join.show(truncate=False)
+----+------------------------+
|name|groceries               |
+----+------------------------+
|Mike|apple, oranges, red wine|
+----+------------------------+

This gives me what I want, since I only need to confirm that all the words of the item appear somewhere in the groceries, even when they are split apart. However, neither of the following returns the match above, because rlike with the plain string 'red apple' only matches that exact word sequence, and contains() only matches an exact substring:

test1.join(test2, F.expr("groceries rlike item"), how='inner').show(truncate=False)
test1.join(test2, F.col('groceries').contains(F.col('item')), how='inner').show(truncate=False)

Solution:

import re
from pyspark.sql import functions as F
from pyspark.sql import types as T
from pyspark.sql.functions import udf

def my_udf(keyword):
    # Turn e.g. "green beans" into '(?i)^(?=.*\bgreen\b)(?=.*\bbeans\b).*$'
    if keyword is None:
        return ''
    pattern = '(?i)^'
    start = '(?=.*\\b'
    end = '\\b)'
    for word in re.split('\\s+', keyword):
        pattern = pattern + start + word + end
    return pattern + '.*$'

regex_udf = udf(my_udf, T.StringType())
B = B.withColumn('regex', regex_udf(B['item']))

regex_join = A.join(B, F.expr("groceries rlike regex"), how='inner')

It manages to do what I want, but it still runs slowly, probably because of the join and the use of the UDF.
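If the Python UDF turns out to be the bottleneck, the same regex column can be built with native Spark column functions instead. This is only a sketch, and it assumes the items contain nothing but word characters and whitespace (words containing regex metacharacters would need extra escaping):

from pyspark.sql import functions as F

# Replace every run of whitespace in `item` with the "close one lookahead,
# open the next one" glue, then add the pattern's prefix and suffix, so that
# "green beans" becomes '(?i)^(?=.*\bgreen\b)(?=.*\bbeans\b).*$'.
B = B.withColumn(
    'regex',
    F.concat(
        F.lit(r'(?i)^(?=.*\b'),
        F.regexp_replace(F.col('item'), r'\s+', r'\\b)(?=.*\\b'),
        F.lit(r'\b).*$'),
    ),
)

regex_join = A.join(B, F.expr('groceries rlike regex'), how='inner')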


1 Answer

#1 · Posted 2024-06-01 20:03:37

You can use an rlike comparison as the join condition with F.expr(). In your case, you need to combine it with an inner join. Try this:

#%%
import pyspark.sql.functions as F
test1 = sqlContext.createDataFrame(
    [("Mike", "apple,greenbeans,redwine,the little prince 70th anniversary gift set (book/cd/downloadable audio)"),
     ("kate", "Whitewine,greenbeans,pineapple"),
     ("Ben", "Water,Spaghetti")],
    schema=["name", "groceries"])
test2 = sqlContext.createDataFrame(
    [("001", "redwine"), ("002", "greenbeans"), ("003", "cd")],
    schema=["id", "item"])
#%%
test_join = test1.join(test2, F.expr("groceries rlike item"), how='inner')

Result:

test_join.show(truncate=False)
+----+-------------------------------------------------------------------------------------------------+---+----------+
|name|groceries                                                                                        |id |item      |
+----+-------------------------------------------------------------------------------------------------+---+----------+
|Mike|apple,greenbeans,redwine,the little prince 70th anniversary gift set (book/cd/downloadable audio)|001|redwine   |
|Mike|apple,greenbeans,redwine,the little prince 70th anniversary gift set (book/cd/downloadable audio)|002|greenbeans|
|Mike|apple,greenbeans,redwine,the little prince 70th anniversary gift set (book/cd/downloadable audio)|003|cd        |
|kate|Whitewine,greenbeans,pineapple                                                                   |002|greenbeans|
+----+-------------------------------------------------------------------------------------------------+---+----------+

For the more complex dataset, the contains() function works, because it matches the item as a literal substring instead of compiling it as a regular expression:

import pyspark.sql.functions as F
test1 = spark.createDataFrame(
    [("Mike", "apple, oranges, red wine,green beans"),
     ("Kate", "Whitewine, green beans waterrr, pineapple, red wine"),
     ("Leah", "red wine, juice, rice, grapes, green beans"),
     ("Ben", "Water,Spaghetti, the little prince 70th anniversary gift set (book/cd/downloadable audio)")],
    schema=["name", "groceries"])
test2 = spark.createDataFrame(
    [("001", "red wine"),
     ("002", "green beans waterrr"),
     ("003", "the little prince 70th anniversary gift set (book/cd/downloadable audio)")],
    schema=["id", "item"])
#%%
test_join = test1.join(test2, F.col('groceries').contains(F.col('item')), how='inner')
test_join.show(truncate=False)

Result:

+----+-----------------------------------------------------------------------------------------+---+------------------------------------------------------------------------+
|name|groceries                                                                                |id |item                                                                    |
+----+-----------------------------------------------------------------------------------------+---+------------------------------------------------------------------------+
|Mike|apple, oranges, red wine,green beans                                                     |001|red wine                                                                |
|Kate|Whitewine, green beans waterrr, pineapple, red wine                                      |001|red wine                                                                |
|Kate|Whitewine, green beans waterrr, pineapple, red wine                                      |002|green beans waterrr                                                     |
|Leah|red wine, juice, rice, grapes, green beans                                               |001|red wine                                                                |
|Ben |Water,Spaghetti, the little prince 70th anniversary gift set (book/cd/downloadable audio)|003|the little prince 70th anniversary gift set (book/cd/downloadable audio)|
+----+-----------------------------------------------------------------------------------------+---+------------------------------------------------------------------------+
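As for why the rlike join misses "the little prince 70th anniversary gift set (book/cd/downloadable audio)": rlike compiles item as a Java regular expression, and the parentheses in that string are regex metacharacters, so the compiled pattern never matches the literal text (contains() is unaffected because it matches plain substrings). If you do want an rlike join that treats the item literally, one option, sketched here rather than taken from the original answer, is to wrap the item in \Q...\E, which the Java regex engine treats as a verbatim quote:

import pyspark.sql.functions as F

# '\\Q' and '\\E' inside the SQL string literal become \Q and \E, which make
# the Java regex engine match everything between them as literal text.
# (Assumes Spark SQL's default, non-ANSI string escaping.)
test_join = test1.join(
    test2,
    F.expr(r"groceries rlike concat('\\Q', item, '\\E')"),
    how='inner',
)
test_join.show(truncate=False)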
