<p>@275365's tutorial on the data structures for NLTK's Naive Bayes classifier is great. From a higher level, we can look at it as follows.</p>
<p>We have input sentences tagged with sentiment:</p>
<pre><code>training_data = [('I love this sandwich.', 'pos'),
('This is an amazing place!', 'pos'),
('I feel very good about these beers.', 'pos'),
('This is my best work.', 'pos'),
("What an awesome view", 'pos'),
('I do not like this restaurant', 'neg'),
('I am tired of this stuff.', 'neg'),
("I can't deal with this", 'neg'),
('He is my sworn enemy!', 'neg'),
('My boss is horrible.', 'neg')]
</code></pre>
<p>Let's consider our features to be individual words, so we extract the list of all possible words from the training data (let's call it the vocabulary), as such:</p>
<pre><code>from nltk.tokenize import word_tokenize
from itertools import chain
vocabulary = set(chain(*[word_tokenize(i[0].lower()) for i in training_data]))
</code></pre>
<p>Essentially, <code>vocabulary</code> here is the same as @275365's <code>all_words</code>:</p>
<pre><code>>>> all_words = set(word.lower() for passage in training_data for word in word_tokenize(passage[0]))
>>> vocabulary = set(chain(*[word_tokenize(i[0].lower()) for i in training_data]))
>>> print(vocabulary == all_words)
True
</code></pre>
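<p>For reference, the same vocabulary can also be built with a plain set comprehension, no <code>itertools.chain</code> needed. A minimal sketch; a bare <code>split()</code> stands in for <code>word_tokenize</code> so it runs without NLTK's tokenizer models (tokenization differs slightly, e.g. punctuation stays attached to words):</p>

```python
# Build the vocabulary with a single set comprehension.
# split() stands in for word_tokenize in this sketch.
training_data = [('I love this sandwich.', 'pos'),
                 ('My boss is horrible.', 'neg')]
vocabulary = {word for sentence, tag in training_data
              for word in sentence.lower().split()}
```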
<p>From each data point (i.e. each sentence and its pos/neg tag), we want to say whether a feature (i.e. a word from the vocabulary) exists or not.</p>
<pre><code>>>> sentence = word_tokenize('I love this sandwich.'.lower())
>>> print({i: True for i in vocabulary if i in sentence})
{'this': True, 'i': True, 'sandwich': True, 'love': True, '.': True}
</code></pre>
<p>But we also want to tell the classifier which words are absent from the sentence yet present in the vocabulary, so for each data point we list out all possible words in the vocabulary and say whether each word exists or not:</p>
<pre><code>>>> sentence = word_tokenize('I love this sandwich.'.lower())
>>> x = {i:True for i in vocabulary if i in sentence}
>>> y = {i:False for i in vocabulary if i not in sentence}
>>> x.update(y)
>>> print(x)
{'love': True, 'deal': False, 'tired': False, 'feel': False, 'is': False, 'am': False, 'an': False, 'good': False, 'best': False, '!': False, 'these': False, 'what': False, '.': True, 'amazing': False, 'horrible': False, 'sworn': False, 'ca': False, 'do': False, 'sandwich': True, 'very': False, 'boss': False, 'beers': False, 'not': False, 'with': False, 'he': False, 'enemy': False, 'about': False, 'like': False, 'restaurant': False, 'this': True, 'of': False, 'work': False, "n't": False, 'i': True, 'stuff': False, 'place': False, 'my': False, 'awesome': False, 'view': False}
</code></pre>
<p>But since this loops through the vocabulary twice, it is more efficient to do it in a single pass:</p>
<pre><code>>>> sentence = word_tokenize('I love this sandwich.'.lower())
>>> x = {i: (i in sentence) for i in vocabulary}
>>> print(x)
{'love': True, 'deal': False, 'tired': False, 'feel': False, 'is': False, 'am': False, 'an': False, 'good': False, 'best': False, '!': False, 'these': False, 'what': False, '.': True, 'amazing': False, 'horrible': False, 'sworn': False, 'ca': False, 'do': False, 'sandwich': True, 'very': False, 'boss': False, 'beers': False, 'not': False, 'with': False, 'he': False, 'enemy': False, 'about': False, 'like': False, 'restaurant': False, 'this': True, 'of': False, 'work': False, "n't": False, 'i': True, 'stuff': False, 'place': False, 'my': False, 'awesome': False, 'view': False}
</code></pre>
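<p>The one-pass featurization above can be wrapped in a small reusable helper. A sketch; <code>split()</code> stands in for <code>word_tokenize</code> so it needs no NLTK data, and converting the tokens to a <code>set</code> makes each membership test O(1):</p>

```python
def featurize(sentence, vocabulary):
    """Map every vocabulary word to True/False depending on presence."""
    tokens = set(sentence.lower().split())
    return {word: (word in tokens) for word in vocabulary}

# A toy vocabulary for illustration
vocabulary = {'i', 'love', 'this', 'sandwich', 'boss', 'horrible'}
features = featurize('I love this sandwich', vocabulary)
```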
<p>So for each sentence, we tell the classifier which words exist and which don't, and also give it the pos/neg tag. We can call that a <code>feature_set</code>: a list of tuples, each made up of an <code>x</code> (as shown above) and its tag.</p>
<pre><code>>>> feature_set = [({i:(i in word_tokenize(sentence.lower())) for i in vocabulary},tag) for sentence, tag in training_data]
[({'this': True, 'love': True, 'deal': False, 'tired': False, 'feel': False, 'is': False, 'am': False, 'an': False, 'sandwich': True, 'ca': False, 'best': False, '!': False, 'what': False, '.': True, 'amazing': False, 'horrible': False, 'sworn': False, 'awesome': False, 'do': False, 'good': False, 'very': False, 'boss': False, 'beers': False, 'not': False, 'with': False, 'he': False, 'enemy': False, 'about': False, 'like': False, 'restaurant': False, 'these': False, 'of': False, 'work': False, "n't": False, 'i': False, 'stuff': False, 'place': False, 'my': False, 'view': False}, 'pos'), ...]
</code></pre>
<p>Then we feed these feature dicts and tags from the feature set to the classifier to train it:</p>
<pre><code>from nltk import NaiveBayesClassifier as nbc
classifier = nbc.train(feature_set)
</code></pre>
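<p>Under the hood, training a naive bayes model amounts to counting: a prior per label plus per-word presence counts. A toy, hand-rolled sketch of the idea with add-one smoothing; this illustrates the principle, not NLTK's exact estimator:</p>

```python
import math
from collections import Counter

def train_counts(feature_set):
    """Count label frequencies and how often each word is True per label."""
    label_counts = Counter()
    true_counts = Counter()  # (label, word) -> times the word was present
    for features, label in feature_set:
        label_counts[label] += 1
        for word, present in features.items():
            if present:
                true_counts[(label, word)] += 1
    return label_counts, true_counts

def classify(features, label_counts, true_counts):
    """Pick the label maximizing log-prior + sum of log-likelihoods."""
    total = sum(label_counts.values())
    best_label, best_score = None, float('-inf')
    for label, n in label_counts.items():
        score = math.log(n / total)
        for word, present in features.items():
            # add-one smoothing so unseen (label, word) pairs never give p=0
            p_present = (true_counts[(label, word)] + 1) / (n + 2)
            score += math.log(p_present if present else 1 - p_present)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

toy_set = [({'love': True, 'horrible': False}, 'pos'),
           ({'love': False, 'horrible': True}, 'neg')]
labels, trues = train_counts(toy_set)
```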
<p>Now you have a trained classifier, and when you want to tag a new sentence, you have to "featurize" the new sentence to see which of its words are in the vocabulary the classifier was trained on:</p>
<pre><code>>>> test_sentence = "This is the best band I've ever heard! foobar"
>>> featurized_test_sentence = {i:(i in word_tokenize(test_sentence.lower())) for i in vocabulary}
</code></pre>
<p><strong>NOTE:</strong> As you can see from the step above, the naive bayes classifier cannot handle out-of-vocabulary words: the <code>foobar</code> token disappears after you featurize the sentence.</p>
<p>Then you feed the featurized test sentence into the classifier and ask it to classify:</p>
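<p>You can see this vanishing directly: the vocabulary alone decides which keys the feature dict gets, so an out-of-vocabulary token can never become one. A small sketch with a toy vocabulary; <code>split()</code> stands in for <code>word_tokenize</code>:</p>

```python
# Only vocabulary words become feature keys, so 'foobar'
# never reaches the classifier at all.
vocabulary = {'this', 'is', 'the', 'best', 'i', 'love'}
test_tokens = set('this is the best band foobar'.split())
featurized = {word: (word in test_tokens) for word in vocabulary}
```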
<pre><code>>>> classifier.classify(featurized_test_sentence)
'pos'
</code></pre>
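<p>Beyond a hard pos/neg label, NLTK's classifier can also report per-label probabilities through <code>prob_classify</code>. A condensed, self-contained run on a miniature version of the training data; <code>split()</code> stands in for <code>word_tokenize</code> so no tokenizer models are required:</p>

```python
from nltk import NaiveBayesClassifier

training_data = [('I love this sandwich', 'pos'),
                 ('What an awesome view', 'pos'),
                 ('My boss is horrible', 'neg'),
                 ('I am tired of this stuff', 'neg')]
vocabulary = {w for s, _ in training_data for w in s.lower().split()}
feature_set = [({w: (w in s.lower().split()) for w in vocabulary}, tag)
               for s, tag in training_data]
classifier = NaiveBayesClassifier.train(feature_set)

test_features = {w: (w in 'what a horrible view'.split()) for w in vocabulary}
dist = classifier.prob_classify(test_features)
# dist.max() is the predicted tag; dist.prob(tag) gives its probability
```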
<p>Hopefully this makes it clearer how to feed data into NLTK's naive bayes classifier for sentiment analysis. Here is the full code without the comments and the walkthrough:</p>
<pre><code>from nltk import NaiveBayesClassifier as nbc
from nltk.tokenize import word_tokenize
from itertools import chain
training_data = [('I love this sandwich.', 'pos'),
('This is an amazing place!', 'pos'),
('I feel very good about these beers.', 'pos'),
('This is my best work.', 'pos'),
("What an awesome view", 'pos'),
('I do not like this restaurant', 'neg'),
('I am tired of this stuff.', 'neg'),
("I can't deal with this", 'neg'),
('He is my sworn enemy!', 'neg'),
('My boss is horrible.', 'neg')]
vocabulary = set(chain(*[word_tokenize(i[0].lower()) for i in training_data]))
feature_set = [({i:(i in word_tokenize(sentence.lower())) for i in vocabulary},tag) for sentence, tag in training_data]
classifier = nbc.train(feature_set)
test_sentence = "This is the best band I've ever heard!"
featurized_test_sentence = {i:(i in word_tokenize(test_sentence.lower())) for i in vocabulary}
print("test_sent:", test_sentence)
print("tag:", classifier.classify(featurized_test_sentence))
</code></pre>