How to fix TypeError: unhashable type: 'Column' in a PySpark DataFrame?

Posted 2024-09-24 02:19:54


I have a DataFrame in which each row contains a list.

For example:

+--------------------+-----+
|             removed|stars|
+--------------------+-----+
|[giant, best, buy...|  3.0|
|[wow, surprised, ...|  4.0|
|[one, day, satisf...|  3.0|

I want to apply a lemmatizer to each row:

from nltk.stem import WordNetLemmatizer 
lemmatizer = WordNetLemmatizer()
df_list = df_removed.withColumn("removed",lemmatizer.lemmatize(df_removed["removed"]))

I get this error:

TypeError: unhashable type: 'Column'

I don't want to use an RDD map function; I just want to apply the lemmatizer on the DataFrame. How can I do that, and how do I fix this error?


Tags: data, df, list, error, buy, one, best, stars
1 Answer

#1 · Posted 2024-09-24 02:19:54

The FreqDist function takes an iterable of hashable objects (it is meant for strings, but it probably works with anything hashable). The error you are getting is because you passed in an iterable of lists. As you suggested, this comes from the change you made:

df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)

If I understand the Pandas apply function documentation correctly, that line applies the nltk.word_tokenize function to a Series, and word_tokenize returns a list of words.
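For example (a quick illustration of my own, not from the original answer), word_tokenize turns each response string into a list of tokens, and lists are unhashable, which is why an iterable of them trips up FreqDist:

import nltk
# nltk.download('punkt') may be needed the first time
nltk.word_tokenize("the store was great")
# -> ['the', 'store', 'was', 'great']
# so each element of df['tokenized_sents'] is one such list per row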

As a solution, just concatenate the lists together before trying to apply FreqDist, like this:

from nltk import FreqDist

# flatten the per-row token lists into one flat list of words
allWords = []
for wordList in words:
    allWords += wordList
FreqDist(allWords)
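The same flattening can also be written with itertools.chain (just an equivalent alternative, not something the original answer used):

from itertools import chain
from nltk import FreqDist

allWords = list(chain.from_iterable(words))  # concatenate all per-row token lists
FreqDist(allWords)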

Here is a more complete revision that does what you want. If all you need is to identify the second set of 100 words, note that mclist will contain it the second time through:

import nltk
import pandas as pd
from nltk import FreqDist
from nltk.corpus import brown
from nltk.tokenize import RegexpTokenizer

df = pd.read_csv('CountryResponses.csv', encoding='utf-8', skiprows=0, error_bad_lines=False)

tokenizer = RegexpTokenizer(r'\w+')
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)

lists =  df['tokenized_sents']
words = []
for wordList in lists:
    words += wordList

#remove 100 most common words based on Brown corpus
fdist = FreqDist(brown.words())
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
words = [w for w in words if w not in mclist]

#Out: ['the',
# ',',
# '.',
# 'of',
# 'and',
#...]

#keep only most common words
fdist = FreqDist(words)
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
# mclist contains second-most common set of 100 words
words = [w for w in words if w in mclist]
# this will keep ALL occurrences of the words in mclist
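
Back to the PySpark question as originally asked: df_removed["removed"] is a Column object, not the list stored in each row, so passing it straight into lemmatizer.lemmatize() is what raises the unhashable 'Column' error. One common fix that stays in the DataFrame API is to wrap the lemmatizer in a UDF; the sketch below assumes removed is an array-of-strings column, as in the sample output in the question:

from nltk.stem import WordNetLemmatizer
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType

# nltk.download('wordnet') may be needed the first time
lemmatizer = WordNetLemmatizer()

# The UDF is applied per row, so lemmatize() receives plain strings
# instead of the Column object itself.
lemmatize_tokens = udf(
    lambda tokens: [lemmatizer.lemmatize(t) for t in tokens] if tokens is not None else None,
    ArrayType(StringType()))

df_list = df_removed.withColumn("removed", lemmatize_tokens(df_removed["removed"]))

Under the hood this still runs the Python function row by row, but it avoids dropping down to rdd.map, which is what the question asked for.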
