Creating a new corpus with NLTK

Posted 2024-06-26 00:13:05

I figured that the answer to my title is usually to go and read the documentation, but I went through the NLTK book and it doesn't give the answer. I'm kind of new to Python.

I have a bunch of .txt files, and I want to be able to use the corpus functions that NLTK provides for the corpora in nltk_data.

I've tried PlaintextCorpusReader but I couldn't get further than:

>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> newcorpus.words()

How do I segment the sentences of newcorpus using punkt? I tried the punkt functions, but they could not read a PlaintextCorpusReader object.

Can you also lead me to how I can write the segmented data into text files?


3 Answers
>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> # If the ./ dir contains the file my_corpus.txt, then you
>>> # can view, say, all the words in it by doing this:
>>> newcorpus.words('my_corpus.txt')
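To also answer the second part of the question (writing the segmented data back to text files), a minimal sketch along the same lines; segmented.txt is just a hypothetical output filename:

>>> # sents() returns a list of token lists; join each one back into
>>> # a single line and write one sentence per line.
>>> with open('segmented.txt', 'w') as fout:
...     for sent in newcorpus.sents('my_corpus.txt'):
...         fout.write(' '.join(sent) + '\n')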

After several years of figuring out how it works, here's the updated tutorial:

How to create an NLTK corpus with a directory of text files?

The main idea is to make use of the nltk.corpus.reader package. If you have a directory of text files in English, it's best to use the PlaintextCorpusReader.

If you have a directory that looks like this:

newcorpus/
         file1.txt
         file2.txt
         ...

then simply use these lines of code and you get a corpus:

import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

corpusdir = 'newcorpus/' # Directory of corpus.

newcorpus = PlaintextCorpusReader(corpusdir, '.*')

NOTE: the PlaintextCorpusReader uses the default nltk.tokenize.sent_tokenize() and nltk.tokenize.word_tokenize() to split your texts into sentences and words; these functions are built for English and may NOT work for all languages.
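If your texts are in another language, you can swap in your own tokenizers when constructing the reader. A minimal sketch, assuming a language where splitting words on whitespace and sentences on line breaks is good enough (both tokenizer choices here are placeholders, not recommendations):

from nltk.corpus.reader.plaintext import PlaintextCorpusReader
from nltk.tokenize import LineTokenizer, RegexpTokenizer

# Treat every whitespace-separated chunk as a word and every line
# as a sentence; substitute tokenizers that fit your language.
newcorpus = PlaintextCorpusReader(
    'newcorpus/', '.*',
    word_tokenizer=RegexpTokenizer(r'\S+'),
    sent_tokenizer=LineTokenizer(),
)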

Here's the full code: creating test text files, creating a corpus with NLTK, and accessing the corpus at different levels:

import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

# Let's create a corpus with 2 texts in different text files.
txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
txt2 = """Are you a foo bar? Yes I am. Possibly, everyone is.\n"""
corpus = [txt1, txt2]

# Make a new dir for the corpus.
corpusdir = 'newcorpus/'
if not os.path.isdir(corpusdir):
    os.mkdir(corpusdir)

# Output the files into the directory.
filename = 0
for text in corpus:
    filename += 1
    with open(corpusdir + str(filename) + '.txt', 'w') as fout:
        print(text, file=fout)

# Check that our corpus does exist and the files are correct.
assert os.path.isdir(corpusdir)
for infile, text in zip(sorted(os.listdir(corpusdir)), corpus):
    with open(corpusdir + infile, 'r') as fin:
        assert fin.read().strip() == text.strip()


# Create a new corpus by specifying the parameters
# (1) directory of the new corpus
# (2) the fileids of the corpus
# NOTE: in this case the fileids are simply the filenames.
newcorpus = PlaintextCorpusReader('newcorpus/', '.*')

# Access each file in the corpus.
for infile in sorted(newcorpus.fileids()):
    print(infile)  # The fileid of each file.
    with newcorpus.open(infile) as fin:  # Opens the file stream.
        print(fin.read().strip())  # Prints the content of the file.
print()

# Access the plaintext; outputs a pure string.
print(newcorpus.raw().strip())
print()

# Access paragraphs in the corpus. (list of list of list of strings)
# NOTE: NLTK automatically calls nltk.tokenize.sent_tokenize and
#       nltk.tokenize.word_tokenize.
#
# Each element in the outermost list is a paragraph, and
# each paragraph contains sentence(s), and
# each sentence contains token(s).
print(newcorpus.paras())
print()

# To access paragraphs of a specific fileid.
print(newcorpus.paras(newcorpus.fileids()[0]))

# Access sentences in the corpus. (list of list of strings)
# NOTE: the texts are flattened into sentences that contain tokens.
print(newcorpus.sents())
print()

# To access sentences of a specific fileid.
print(newcorpus.sents(newcorpus.fileids()[0]))

# Access just tokens/words in the corpus. (list of strings)
print(newcorpus.words())

# To access tokens of a specific fileid.
print(newcorpus.words(newcorpus.fileids()[0]))
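Since the original question was about using NLTK's corpus functions on your own files: the reader's output plugs straight into the usual NLTK tools. A small usage sketch (assuming newcorpus from the code above):

import nltk

# Frequency distribution over all tokens in the corpus.
fdist = nltk.FreqDist(w.lower() for w in newcorpus.words())
print(fdist.most_common(10))

# Wrap the tokens in an nltk.Text for concordance, collocations, etc.
text = nltk.Text(newcorpus.words())
text.concordance('foo')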

Finally, to read a directory of texts and create an NLTK corpus in another language, you must first ensure that you have Python-callable word tokenization and sentence tokenization modules that take string input and produce output like this:

>>> from nltk.tokenize import sent_tokenize, word_tokenize
>>> txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
>>> sent_tokenize(txt1)
['This is a foo bar sentence.', 'And this is the first txtfile in the corpus.']
>>> word_tokenize(sent_tokenize(txt1)[0])
['This', 'is', 'a', 'foo', 'bar', 'sentence', '.']

I think the PlaintextCorpusReader already segments its input with a punkt tokenizer, at least if your input language is English.

PlaintextCorpusReader's constructor:

def __init__(self, root, fileids,
             word_tokenizer=WordPunctTokenizer(),
             sent_tokenizer=nltk.data.LazyLoader(
                 'tokenizers/punkt/english.pickle'),
             para_block_reader=read_blankline_block,
             encoding='utf8'):

You can pass the reader a word and a sentence tokenizer, but for the latter the default already is nltk.data.LazyLoader('tokenizers/punkt/english.pickle').
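So if your texts are in, say, German, a minimal sketch would be to pass the matching punkt model instead (NLTK ships pre-trained punkt pickles for several languages; this assumes you have downloaded them via nltk.download('punkt')):

import nltk.data
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

german_corpus = PlaintextCorpusReader(
    'newcorpus/', '.*',
    sent_tokenizer=nltk.data.LazyLoader('tokenizers/punkt/german.pickle'),
)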

For a single string, a tokenizer would be used as follows (explained here; see section 5 for the punkt tokenizer):

>>> import nltk.data
>>> text = """
... Punkt knows that the periods in Mr. Smith and Johann S. Bach
... do not mark sentence boundaries.  And sometimes sentences
... can start with non-capitalized words.  i is a good variable
... name.
... """
>>> tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
>>> tokenizer.tokenize(text.strip())
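Joined back onto single lines for readability, this should split the text into exactly three sentences:

>>> for sent in tokenizer.tokenize(text.strip()):
...     print(sent.replace('\n', ' '))
Punkt knows that the periods in Mr. Smith and Johann S. Bach do not mark sentence boundaries.
And sometimes sentences can start with non-capitalized words.
i is a good variable name.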
