NLTK: Can I add terminals to an already generated grammar?

Posted 2024-09-25 06:31:30


I have loaded the grammar generated from the atis grammar, and now I want to add some rules of my own, in particular terminals for words in my sentences. Can I do this?

import nltk
grammar = nltk.data.load('grammars/large_grammars/atis.cfg')

To this grammar I want to add more terminals.


Tags: import, terminal, data, rules, grammar, load, cfg, sentence
1 Answer

Answered by a forum user · 2024-09-25 06:31:30

In short: yes, it is possible, but you will go through a lot of pain remapping every new terminal to the correct non-terminal. It is much easier to rewrite the CFG as text, using atis.cfg as the base, and then read the new CFG text file back in.
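That rewrite-and-reload approach can be sketched with a toy grammar (standing in for atis.cfg, which requires an nltk_data download; the rules below are invented for illustration):

```python
import nltk
from nltk import CFG

# A tiny stand-in for atis.cfg; these rules are made up for illustration.
base = """
S -> VP
VP -> V NP
NP -> 'flights' | NP PP
PP -> P NP
V -> 'show'
P -> 'to'
NP -> 'detroit'
"""

# Rewrite the grammar as text, appending the new terminal rule,
# then read it back so NLTK rebuilds all of its internal indexes.
extended = base + "NP -> 'singapore'\n"
grammar = CFG.fromstring(extended)

parser = nltk.ChartParser(grammar)
sent = ['show', 'flights', 'to', 'singapore']
trees = list(parser.parse(sent))
print(len(trees))  # 1: the extended grammar now covers 'singapore'
```

Because the grammar is rebuilt from scratch, there is no index bookkeeping to worry about.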


At length, see the following.

First, let's look at what a CFG grammar in NLTK is and what it contains:

>>> import nltk
>>> g = nltk.data.load('grammars/large_grammars/atis.cfg')
>>> dir(g)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__unicode__', '__weakref__', '_all_unary_are_lexical', '_calculate_grammar_forms', '_calculate_indexes', '_calculate_leftcorners', '_categories', '_empty_index', '_immediate_leftcorner_categories', '_immediate_leftcorner_words', '_is_lexical', '_is_nonlexical', '_leftcorner_parents', '_leftcorner_words', '_leftcorners', '_lexical_index', '_lhs_index', '_max_len', '_min_len', '_productions', '_rhs_index', '_start', 'check_coverage', 'fromstring', 'is_binarised', 'is_chomsky_normal_form', 'is_flexible_chomsky_normal_form', 'is_leftcorner', 'is_lexical', 'is_nonempty', 'is_nonlexical', 'leftcorner_parents', 'leftcorners', 'max_len', 'min_len', 'productions', 'start', 'unicode_repr']

For details, see https://github.com/nltk/nltk/blob/develop/nltk/grammar.py#L421

It seems that both terminal and non-terminal rules are of type Production, see https://github.com/nltk/nltk/blob/develop/nltk/grammar.py#L236, i.e.

A grammar production. Each production maps a single symbol on the "left-hand side" to a sequence of symbols on the "right-hand side". (In the case of context-free productions, the left-hand side must be a Nonterminal, and the right-hand side is a sequence of terminals and Nonterminals.) "terminals" can be any immutable hashable object that is not a Nonterminal. Typically, terminals are strings representing words, such as "dog" or "under".
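For instance (a minimal illustration of the two kinds of production, not taken from the atis grammar itself):

```python
from nltk.grammar import Nonterminal, Production

# A lexical production: the RHS contains a terminal string.
lexical = Production(Nonterminal('detroit'), ['detroit'])
# A non-lexical production: the RHS contains only Nonterminals.
nonlexical = Production(Nonterminal('NOUN_NP'), [Nonterminal('detroit')])

print(lexical)                     # detroit -> 'detroit'
print(lexical.is_lexical())        # True
print(nonlexical.is_nonlexical())  # True
```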

Let's look at how the grammar stores its productions:

>>> type(g._productions)
<type 'list'>

Now it seems we can create nltk.grammar.Production objects and append them to grammar._productions.

Let's try the original grammar first:

>>> import nltk
>>> original_grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
>>> original_parser = nltk.ChartParser(original_grammar)
>>> sent = ['show', 'me', 'northwest', 'flights', 'to', 'detroit', '.']
>>> for i in original_parser.parse(sent):
...     print i
...     break
... 
(SIGMA
  (IMPR_VB
    (VERB_VB (show show))
    (NP_PPO
      (pt_pron_ppo me)
      (NAPPOS_NP (NOUN_NP (northwest northwest))))
    (NP_NNS (NOUN_NNS (pt207 flights)) (PREP_IN (to to)))
    (AVPNP_NP (NOUN_NP (detroit detroit)))
    (pt_char_per .)))

The original grammar has no terminal singapore:

>>> sent = ['show', 'me', 'northwest', 'flights', 'to', 'singapore', '.']
>>> for i in original_parser.parse(sent):
...     print i
...     break
... 
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/nltk/parse/api.py", line 49, in parse
    return iter(self.parse_all(sent))
  File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1350, in parse_all
    chart = self.chart_parse(tokens)
  File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1309, in chart_parse
    self._grammar.check_coverage(tokens)
  File "/usr/local/lib/python2.7/dist-packages/nltk/grammar.py", line 631, in check_coverage
    "input words: %r." % missing)
ValueError: Grammar does not cover some of the input words: u"'singapore'".

Before we try adding singapore to the grammar, let's look at how detroit is stored in the grammar:

>>> original_grammar._rhs_index['detroit']
[detroit -> 'detroit']
>>> type(original_grammar._rhs_index['detroit'])
<type 'list'>
>>> type(original_grammar._rhs_index['detroit'][0])
<class 'nltk.grammar.Production'>
>>> original_grammar._rhs_index['detroit'][0]._lhs
detroit
>>> original_grammar._rhs_index['detroit'][0]._rhs
(u'detroit',)
>>> type(original_grammar._rhs_index['detroit'][0]._lhs)
<class 'nltk.grammar.Nonterminal'>
>>> type(original_grammar._rhs_index['detroit'][0]._rhs)
<type 'tuple'>
>>> original_grammar._rhs_index[original_grammar._rhs_index['detroit'][0]._lhs]
[NOUN_NP -> detroit, NOUN_NP -> detroit minneapolis toronto]

So now we can try to recreate the same kind of Production object for singapore:

# First let's create Non-terminal for singapore.
>>> nltk.grammar.Nonterminal('singapore')
singapore
>>> lhs = nltk.grammar.Nonterminal('singapore')
>>> rhs = [u'singapore']
# Now we can create the Production for singapore.
>>> singapore_production = nltk.grammar.Production(lhs, rhs)
# Now let's try to add this Production to the grammar's list of productions
>>> new_grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
>>> new_grammar._productions.append(singapore_production)

But it still doesn't work: providing the terminal production by itself doesn't connect it to the rest of the CFG, so singapore still cannot be parsed:

>>> new_grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
>>> new_grammar._productions.append(singapore_production)
>>> new_parser = nltk.ChartParser(new_grammar)
>>> sent = ['show', 'me', 'northwest', 'flights', 'to', 'singapore', '.']
>>> new_parser.parse(sent)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/nltk/parse/api.py", line 49, in parse
    return iter(self.parse_all(sent))
  File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1350, in parse_all
    chart = self.chart_parse(tokens)
  File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1309, in chart_parse
    self._grammar.check_coverage(tokens)
  File "/usr/local/lib/python2.7/dist-packages/nltk/grammar.py", line 631, in check_coverage
    "input words: %r." % missing)
ValueError: Grammar does not cover some of the input words: u"'singapore'".
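The reason (based on reading nltk/grammar.py): the CFG constructor precomputes lookup indexes such as _lexical_index, and check_coverage consults that index rather than the production list, so an append to _productions is invisible to it. A toy demonstration:

```python
import nltk
from nltk.grammar import CFG, Nonterminal, Production

g = CFG.fromstring("S -> 'detroit'")
# Append a new lexical production behind the grammar's back.
g._productions.append(Production(Nonterminal('S'), ['singapore']))

print(g.productions()[-1])               # S -> 'singapore' is in the list...
print('singapore' in g._lexical_index)   # ...but False: the index is stale
try:
    g.check_coverage(['singapore'])
except ValueError as e:
    print(e)  # Grammar does not cover some of the input words ...
```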

From the following, we know that singapore should behave like detroit, and that the left-hand side of detroit's production appears on the right-hand side of NOUN_NP -> detroit:

>>> original_grammar._rhs_index[original_grammar._rhs_index['detroit'][0]._lhs]
[NOUN_NP -> detroit, NOUN_NP -> detroit minneapolis toronto]

So what we need to do is add another production for singapore that leads to the NOUN_NP non-terminal, i.e. attach our singapore LHS to the right-hand side of a NOUN_NP production:

>>> lhs = nltk.grammar.Nonterminal('singapore')
>>> rhs = [u'singapore']
>>> singapore_production = nltk.grammar.Production(lhs, rhs)
>>> new_grammar._productions.append(singapore_production)

Now let's add the new production NOUN_NP -> singapore:

lhs2 = nltk.grammar.Nonterminal('NOUN_NP')
new_grammar._productions.append(nltk.grammar.Production(lhs2, [lhs]))

Now, hopefully, the parser works:

sent = ['show', 'me', 'northwest', 'flights', 'to', 'singapore', '.']
print new_grammar.productions()[2091]
print new_grammar.productions()[-1]
new_parser = nltk.ChartParser(new_grammar)
for i in new_parser.parse(sent):
    print i

[out]:

Traceback (most recent call last):
  File "test.py", line 31, in <module>
    for i in new_parser.parse(sent):
  File "/usr/local/lib/python2.7/dist-packages/nltk/parse/api.py", line 49, in parse
    return iter(self.parse_all(sent))
  File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1350, in parse_all
    chart = self.chart_parse(tokens)
  File "/usr/local/lib/python2.7/dist-packages/nltk/parse/chart.py", line 1309, in chart_parse
    self._grammar.check_coverage(tokens)
  File "/usr/local/lib/python2.7/dist-packages/nltk/grammar.py", line 631, in check_coverage
    "input words: %r." % missing)
ValueError: Grammar does not cover some of the input words: u"'singapore'".

But the grammar still doesn't recognize the new terminals and non-terminals we added, because its internal indexes were computed at construction time and are not updated by appending to _productions. So let's try dumping our new grammar to a string and creating a fresh grammar from that string:

import nltk

lhs = nltk.grammar.Nonterminal('singapore')
rhs = [u'singapore']
singapore_production = nltk.grammar.Production(lhs, rhs)
new_grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
new_grammar._productions.append(singapore_production)    
lhs2 = nltk.grammar.Nonterminal('NOUN_NP')
new_grammar._productions.append(nltk.grammar.Production(lhs2, [lhs]))

# Create a newer grammar from new_grammar's string
newer_grammar = nltk.grammar.CFG.fromstring(str(new_grammar).split('\n')[1:])
# Reassign new_grammar's start symbol to newer_grammar !!!
newer_grammar._start = new_grammar.start()
sent = ['show', 'me', 'northwest', 'flights', 'to', 'singapore', '.']
print newer_grammar.productions()[2091]
print newer_grammar.productions()[-1]
newer_parser = nltk.ChartParser(newer_grammar)
for i in newer_parser.parse(sent):
    print i
    break
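That string round-trip can be wrapped in a small helper; add_terminal below is my own name, not an NLTK API, and it assumes str(grammar) begins with the one-line "Grammar with N productions" header:

```python
import nltk
from nltk.grammar import CFG

def add_terminal(grammar, word, parent):
    """Return a NEW grammar that also derives `word` under `parent`.

    Serializes the grammar to its rule text, appends the two new
    productions, and re-reads it so all internal indexes are rebuilt.
    """
    lines = str(grammar).split('\n')[1:]  # drop the "Grammar with ..." header
    lines.append("%s -> '%s'" % (word, word))
    lines.append("%s -> %s" % (parent, word))
    new = CFG.fromstring('\n'.join(lines))
    new._start = grammar.start()  # keep the original start symbol
    return new

# Toy usage (atis.cfg itself requires an nltk_data download):
toy = CFG.fromstring("SIGMA -> NOUN_NP\nNOUN_NP -> 'detroit'")
bigger = add_terminal(toy, 'singapore', 'NOUN_NP')
print(len(list(nltk.ChartParser(bigger).parse(['singapore']))))  # 1
```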

[out]:

(SIGMA
  (IMPR_VB
    (VERB_VB (show show))
    (NP_PPO
      (pt_pron_ppo me)
      (NAPPOS_NP (NOUN_NP (northwest northwest))))
    (NP_NNS (NOUN_NNS (pt207 flights)) (PREP_IN (to to)))
    (AVPNP_NP (NOUN_NP (singapore singapore)))
    (pt_char_per .)))
