I'm trying to translate a series of Italian tweets into English. They are stored in a CSV file, so I extract them with pandas and compute their VADER sentiment. Unfortunately, I get this error: json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0).
I've tried removing the emojis from the tweets and using a VPN as suggested in other posts, but neither works.
import emoji
import demoji
import pandas as pd
from googletrans import Translator
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def remove_emoji(text):
    # strip every emoji via the emoji package's regexp
    return emoji.get_emoji_regexp().sub(u'', text)

def extract_emojis(text):
    # keep only the emoji characters
    return ''.join(c for c in text if c in emoji.UNICODE_EMOJI)

def clean_emojis(text):
    # keep only the non-emoji characters
    return ''.join(c for c in text if c not in emoji.UNICODE_EMOJI)
def sentiment_analyzer_scores(text, engl=True):
    emojis = ''
    translation = text  # default, so the English path and failures still work
    if not engl:
        try:
            emojis = extract_emojis(text)
            text = clean_emojis(text)
            text = remove_emoji(text)  # demoji.replace(text) was a no-op: its result was discarded
            text = text.encode('ascii', 'ignore').decode('ascii')
            # translator = Translator(from_lang="Italian", to_lang="English")
            # translation = translator.translate(text)
            translation = translator.translate(text).text
            # print(translation)
        except Exception as e:
            print(text)
            print(e)
    text = translation + emojis  # re-attach the emojis so VADER can score them
    # print(text)
    score = analyser.polarity_scores(text)
    return score['compound']
def anl_tweets(lst, engl=True):
    sents = []
    for i, tweet_text in enumerate(lst, start=1):
        try:
            sentiment = sentiment_analyzer_scores(tweet_text, engl)
            sents.append(sentiment)
            print("Sentiment del tweet n° %s = %s" % (i, sentiment))
        except Exception as e:
            sents.append(0)
            print(e)
    return sents
# Main
translator = Translator()
analyser = SentimentIntensityAnalyzer()

file_name = 'file.csv'
df = pd.read_csv(file_name)
print(df.shape)

# Calculate sentiment and add a column
df['tweet_sentiment'] = anl_tweets(df.tweet_text, False)

# Save the changes
df.to_csv(file_name, encoding='utf-8', index=False)
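As an aside, the three emoji helpers in my code could be folded into a single pass. A stdlib-only sketch (the codepoint ranges are a rough stand-in for emoji.UNICODE_EMOJI and cover only the main emoji blocks):

```python
def split_emojis(text):
    """Split `text` into (plain, emojis) in one pass.

    Rough stdlib-only approximation of the emoji.UNICODE_EMOJI lookup:
    only the major emoji codepoint blocks are checked.
    """
    emoji_ranges = (
        (0x1F300, 0x1FAFF),  # pictographs, emoticons, symbols
        (0x2600, 0x27BF),    # misc symbols and dingbats
        (0xFE0F, 0xFE0F),    # variation selector used by emoji
    )

    def is_emoji(c):
        return any(lo <= ord(c) <= hi for lo, hi in emoji_ranges)

    plain = ''.join(c for c in text if not is_emoji(c))
    emojis = ''.join(c for c in text if is_emoji(c))
    return plain, emojis
```

This replaces the separate extract_emojis/clean_emojis/remove_emoji calls with one function, so the tweet text is only scanned twice instead of three or four times.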
This has nothing to do with emojis: Google limits how many characters you can translate, and once you hit that limit the Google API simply blocks you.
Read about the quotas here. The simple fix is to split the script into chunks and use a proxy server / different IP addresses.
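A minimal sketch of that chunking idea, with the translation call injected as a function so any backend (googletrans, a proxy-aware client, etc.) can be plugged in. The batch_size and pause values are guesses to tune against your quota, not documented limits:

```python
import time

def translate_in_chunks(texts, translate_fn, batch_size=50, pause=60):
    """Translate `texts` in batches, sleeping between batches so the
    free endpoint is less likely to block the IP.

    `translate_fn` maps one string to its translation; on failure the
    original string is kept so output stays aligned with input.
    """
    out = []
    for i in range(0, len(texts), batch_size):
        for t in texts[i:i + batch_size]:
            try:
                out.append(translate_fn(t))
            except Exception:
                out.append(t)  # fall back to the untranslated text
        if i + batch_size < len(texts):
            time.sleep(pause)  # back off before the next batch
    return out
```

With googletrans you would pass something like `lambda t: Translator().translate(t, src='it', dest='en').text` as `translate_fn`.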
Another option is https://pypi.org/project/translate/ (though I haven't tried it myself).