How do I get a POS-tagged file as output when my input is a text file?

Posted 2024-09-25 12:24:57


Here is the code I am trying, but it raises an error:

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize
stop_words = set(stopwords.words('english'))

file_content = open("Dictionary.txt").read()
tokens = nltk.word_tokenize(file_content)

# sent_tokenize is an instance of
# PunktSentenceTokenizer from the nltk.tokenize.punkt module

tokenized = sent_tokenize(tokens)
for i in tokenized:

    # The word tokenizer is used to find the words
    # and punctuation in a string
    wordsList = nltk.word_tokenize(i)

    # removing stop words from wordsList
    wordsList = [w for w in wordsList if w not in stop_words]

    # Using a tagger, i.e. a part-of-speech
    # (POS) tagger
    tagged = nltk.pos_tag(wordsList)

    print(tagged)

Error:

Traceback (most recent call last):
  File "tag.py", line 12, in <module>
    tokenized = sent_tokenize(tokens)
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/__init__.py", line 105, in sent_tokenize
    return tokenizer.tokenize(text)
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1269, in tokenize
    return list(self.sentences_from_text(text, realign_boundaries))
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1323, in sentences_from_text
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1323, in <listcomp>
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1313, in span_tokenize
    for sl in slices:
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1354, in _realign_boundaries
    for sl1, sl2 in _pair_iter(slices):
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 317, in _pair_iter
    prev = next(it)
  File "/home/mahadev/anaconda3/lib/python3.7/site-packages/nltk/tokenize/punkt.py", line 1327, in _slices_from_text
    for match in self._lang_vars.period_context_re().finditer(text):
TypeError: expected string or bytes-like object


1 Answer

Anonymous · #1 · Posted 2024-09-25 12:24:57

I don't know what your code is supposed to do, but the error you are getting comes from the data type of the tokens variable: sent_tokenize expects a string, yet it is being passed a list.
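To see the mismatch in isolation, here is a minimal sketch (it assumes the punkt tokenizer data has already been fetched with nltk.download('punkt')):

from nltk.tokenize import sent_tokenize

text = "This is one sentence. This is another."
print(sent_tokenize(text))          # OK: a string goes in, a list of sentences comes out
print(sent_tokenize(text.split()))  # raises TypeError: expected string or bytes-like object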

You should change that line to:

tokens = str(nltk.word_tokenize(file_content))
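Note that str() turns the token list into its printed representation (brackets, quotes and commas included), which the sentence tokenizer then splits again. A cleaner fix is to pass the raw file contents straight to sent_tokenize and only word-tokenize inside the loop. The sketch below follows that route and also writes the result to a file, as the question asks; the output name Dictionary_tagged.txt is just an illustration, and it assumes the punkt, stopwords and averaged_perceptron_tagger NLTK data packages are installed:

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize

stop_words = set(stopwords.words('english'))

with open("Dictionary.txt") as f:
    file_content = f.read()

# Sentence-split the raw text first, then word-tokenize each sentence
with open("Dictionary_tagged.txt", "w") as out:
    for sentence in sent_tokenize(file_content):
        words = [w for w in word_tokenize(sentence) if w not in stop_words]
        tagged = nltk.pos_tag(words)
        # write one word/TAG pair per token, one sentence per line
        out.write(" ".join("{}/{}".format(w, t) for w, t in tagged) + "\n")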
