How do I use NLTK to tokenize a list of sentences into words?

Posted on 2024-10-01 02:35:29


I have a text dataset. It consists of many lines, and each line contains two sentences separated by a tab, like this:

this is string 1, first sentence.    this is string 2, first sentence.
this is string 1, second sentence.    this is string 2, second sentence.

I then split the data with the following code:

# file readdata.py
from globalvariable import *  # provides the shared list kalimatayat
import os


class readdata:
    def dataAyat(self):
        global kalimatayat
        # raw string so the backslash in the Windows path is not treated as an escape
        with open(os.path.join(r'E:\dataset', 'dataset.txt'), 'r') as fo:
            for line in fo:
                datatxt = line.rstrip('\n').split('\t')
                # wrap each tab-separated field in its own one-element list
                newdatatxt = [x.split('\t') for x in datatxt]
                kalimatayat.append(newdatatxt)
                print(newdatatxt)


readdata().dataAyat()

It works fine, and the output is:

[['this is string 1, first sentence.'],['this is string 2, first sentence.']]
[['this is string 1, second sentence.'],['this is string 2, second sentence.']]

What I want to do now is tokenize these lists with NLTK's word tokenizer; the output I expect looks like this:

[['this' , 'is' , 'string' , '1' , ',' , 'first' , 'sentence' , '.'],['this' , 'is' , 'string' , '2' , ',' , 'first' , 'sentence' , '.']]
[['this' , 'is' , 'string' , '1' , ',' , 'second' , 'sentence' , '.'],['this' , 'is' , 'string' , '2' , ',' , 'second' , 'sentence' , '.']]

Does anyone know how to tokenize to get the output above? I would like to write a tokenize function in "tokenizer.py" and call it from "mainfile.py".


1 Answer

User
#1 · Posted on 2024-10-01 02:35:29

To tokenize a list of sentences, iterate over it and store the results in a list:

import nltk

data = [[['this is string 1, first sentence.'], ['this is string 2, first sentence.']],
        [['this is string 1, second sentence.'], ['this is string 2, second sentence.']]]

results = []
for sentence in data:
    sentence_results = []
    for s in sentence:
        # s is a one-element list, so tokenize the string it contains
        sentence_results.append(nltk.word_tokenize(s[0]))
    results.append(sentence_results)
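
Note that word_tokenize relies on NLTK's Punkt models; if they are not installed yet, download them once before running the code above:

import nltk
nltk.download('punkt')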

The result will look like this:

[[['this' , 'is' , 'string' , '1' , ',' , 'first' , 'sentence' , '.'],
  ['this' , 'is' , 'string' , '2' , ',' , 'first' , 'sentence' , '.']],
 [['this' , 'is' , 'string' , '1' , ',' , 'second' , 'sentence' , '.'],
  ['this' , 'is' , 'string' , '2' , ',' , 'second' , 'sentence' , '.']]]
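
As for the "tokenizer.py" / "mainfile.py" split mentioned in the question, a minimal sketch could look like the following. The function name tokenize, and the assumption that globalvariable.py defines kalimatayat = [], are illustrative and not part of the original post:

# file tokenizer.py (module name taken from the question)
import nltk


def tokenize(data):
    # data has the shape produced by readdata.dataAyat():
    # [[['sentence a'], ['sentence b']], ...]
    results = []
    for line in data:
        # tokenize the single string inside each inner one-element list
        results.append([nltk.word_tokenize(s[0]) for s in line])
    return results

# file mainfile.py (entry point named in the question)
from readdata import readdata
from globalvariable import kalimatayat  # assumes globalvariable.py defines kalimatayat = []
from tokenizer import tokenize

readdata().dataAyat()             # fills kalimatayat with the tab-split lines
tokenized = tokenize(kalimatayat)
for line in tokenized:
    print(line)

Keeping the tokenization in its own module this way lets mainfile.py stay a thin driver.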
