Converting a nested dictionary to a PySpark DataFrame

Posted 2024-09-22 20:35:52


Greetings, fellow programmers.

I recently started working with PySpark, coming from a pandas background. I need to compute the similarity between users in my data. Since I couldn't find a way to do this directly in PySpark, I resorted to using Python dictionaries to build a similarity dataframe.

However, I'm stuck on converting that nested dictionary into a PySpark DataFrame. Could you point me in the right direction to achieve the expected result?

import pyspark
from pyspark.context import SparkContext
from pyspark.sql import SparkSession
from scipy.spatial import distance


spark = SparkSession.builder.getOrCreate()

from pyspark.sql import *

traindf = spark.createDataFrame([
    ('u11',[1, 2, 3]),
    ('u12',[4, 5, 6]),
    ('u13',[7, 8, 9])
]).toDF("user","rating")

traindf.show()

Output:

+----+---------+
|user|   rating|
+----+---------+
| u11|[1, 2, 3]|
| u12|[4, 5, 6]|
| u13|[7, 8, 9]|
+----+---------+

The goal is to generate the similarity between every pair of users and put it in a PySpark DataFrame:

parent_dict = {}
for parent_row in traindf.collect():
#     print(parent_row['user'],parent_row['rating'])
    child_dict = {}
    for child_row in traindf.collect():
        similarity = distance.cosine(parent_row['rating'],child_row['rating'])
        child_dict[child_row['user']] = similarity
    parent_dict[parent_row['user']] = child_dict

print(parent_dict)

Output:

{'u11': {'u11': 0.0, 'u12': 0.0253681538029239, 'u13': 0.0405880544333298},
 'u12': {'u11': 0.0253681538029239, 'u12': 0.0, 'u13': 0.001809107314273195},
 'u13': {'u11': 0.0405880544333298, 'u12': 0.001809107314273195, 'u13': 0.0}}

From this dictionary, I want to construct a PySpark DataFrame like this:

+-----+-----+--------------------+
|user1|user2|          similarity|
+-----+-----+--------------------+
|  u11|  u11|                 0.0|
|  u11|  u12|  0.0253681538029239|
|  u11|  u13|  0.0405880544333298|
|  u12|  u11|  0.0253681538029239|
|  u12|  u12|                 0.0|
|  u12|  u13|0.001809107314273195|
|  u13|  u11|  0.0405880544333298|
|  u13|  u12|0.001809107314273195|
|  u13|  u13|                 0.0|
+-----+-----+--------------------+

So far I have tried converting the dict to a pandas DataFrame and then converting that to a PySpark DataFrame. However, I need to do this at scale, and I am looking for a more Spark-native way to do it.

parent_user = []
child_user = []
child_similarity = []

for parent_row in traindf.collect():
    
    for child_row in traindf.collect():
        similarity = distance.cosine(parent_row['rating'],child_row['rating'])
        child_user.append(child_row['user'])
        child_similarity.append(similarity)
        parent_user.append(parent_row['user'])

my_dict = {}
my_dict['user1'] = parent_user
my_dict['user2'] = child_user
my_dict['similarity'] = child_similarity

import pandas as pd

pd.DataFrame(my_dict)
df = spark.createDataFrame(pd.DataFrame(my_dict))
df.show()

Output:

+-----+-----+--------------------+
|user1|user2|          similarity|
+-----+-----+--------------------+
|  u11|  u11|                 0.0|
|  u11|  u12|  0.0253681538029239|
|  u11|  u13|  0.0405880544333298|
|  u12|  u11|  0.0253681538029239|
|  u12|  u12|                 0.0|
|  u12|  u13|0.001809107314273195|
|  u13|  u11|  0.0405880544333298|
|  u13|  u12|0.001809107314273195|
|  u13|  u13|                 0.0|
+-----+-----+--------------------+

3 Answers

Maybe you can do something like this:

import pandas as pd
from pyspark.sql import SQLContext

my_dic = {'u11': {'u11': 0.0, 'u12': 0.0253681538029239, 'u13': 0.0405880544333298},
          'u12': {'u11': 0.0253681538029239, 'u12': 0.0, 'u13': 0.001809107314273195},
          'u13': {'u11': 0.0405880544333298, 'u12': 0.001809107314273195, 'u13': 0.0}}

df = pd.DataFrame.from_dict(my_dic).unstack().to_frame().reset_index()
df.columns = ['user1', 'user2', 'similarity']
sqlCtx = SQLContext(sc)  # sc is the existing SparkContext
sqlCtx.createDataFrame(df).show()
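
If you want to avoid the pandas round trip entirely, a minimal sketch (reusing the parent_dict values from the question) is to flatten the nested dict into (user1, user2, similarity) tuples and pass them straight to spark.createDataFrame:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Nested {user1: {user2: similarity}} dict, as built in the question
parent_dict = {'u11': {'u11': 0.0, 'u12': 0.0253681538029239, 'u13': 0.0405880544333298},
               'u12': {'u11': 0.0253681538029239, 'u12': 0.0, 'u13': 0.001809107314273195},
               'u13': {'u11': 0.0405880544333298, 'u12': 0.001809107314273195, 'u13': 0.0}}

# Flatten the nested dict into one row per (user1, user2) pair
rows = [(u1, u2, sim)
        for u1, inner in parent_dict.items()
        for u2, sim in inner.items()]

sim_df = spark.createDataFrame(rows, ["user1", "user2", "similarity"])
sim_df.show()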

OK, now your question is clearer. I assume you start with the Spark DataFrame of user, rating. What you want to do is join this DF with itself, which creates a cross product of all possible user pairs (with their ratings), including rows that pair a user with itself (those can be filtered out later), and then compute a new column containing the similarity.

import numpy as np
import pyspark.sql.functions as psf
from pyspark.sql.types import FloatType

# Cosine similarity between two rating vectors
def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

dot_udf = psf.udf(cos_sim, FloatType())

# `data` is the user/rating DataFrame (traindf in the question); the self-join
# produces every pair of distinct users, then the UDF scores each pair
data.alias("i").join(data.alias("j"), psf.col("i.user") != psf.col("j.user"))\
    .select(
        psf.col("i.user").alias("user1"),
        psf.col("j.user").alias("user2"),
        dot_udf("i.rating", "j.rating").alias("similarity"))\
    .sort("similarity")\
    .show()

The output is as required:

+-----+-----+----------+
|user1|user2|similarity|
+-----+-----+----------+
|  u11|  u12|0.70710677|
|  u13|  u11|0.70710677|
|  u11|  u13|0.70710677|
|  u12|  u11|0.70710677|
|  u12|  u13|       1.0|
|  u13|  u12|       1.0|
+-----+-----+----------+
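
Note that this computes cosine similarity, while distance.cosine in the question returns the cosine distance (1 minus the similarity), and the expected table also keeps the self-pairs. A minimal sketch of the same self-join adjusted for that, assuming the traindf from the question, might look like:

import numpy as np
import pyspark.sql.functions as psf
from pyspark.sql.types import FloatType

# Cosine distance (1 - cosine similarity), matching scipy.spatial.distance.cosine
def cos_dist(a, b):
    return float(1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

dist_udf = psf.udf(cos_dist, FloatType())

# crossJoin keeps the (u11, u11)-style self-pairs shown in the expected output
traindf.alias("i").crossJoin(traindf.alias("j"))\
    .select(
        psf.col("i.user").alias("user1"),
        psf.col("j.user").alias("user2"),
        dist_udf("i.rating", "j.rating").alias("similarity"))\
    .show()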
