Corpus-level BLEU vs. sentence-level BLEU

Published 2024-10-02 02:25:29


I imported nltk in Python to compute BLEU scores on Ubuntu. I understand how the sentence-level BLEU score works, but I don't understand how the corpus-level BLEU score works.

Here is my corpus-level BLEU code:

import nltk

hypothesis = ['This', 'is', 'cat'] 
reference = ['This', 'is', 'a', 'cat']
BLEUscore = nltk.translate.bleu_score.corpus_bleu([reference], [hypothesis], weights = [1])
print(BLEUscore)

For some reason, the BLEU score from the code above is 0. I was expecting a corpus-level BLEU score of at least 0.5.

Here is my sentence-level BLEU code:

import nltk

hypothesis = ['This', 'is', 'cat'] 
reference = ['This', 'is', 'a', 'cat']
BLEUscore = nltk.translate.bleu_score.sentence_bleu([reference], hypothesis, weights = [1])
print(BLEUscore)

The sentence-level BLEU score here is 0.71, which is what I expected, given the brevity penalty and the missing word "a". However, I don't understand how the corpus-level BLEU score works.
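For what it's worth, that 0.71 is just the brevity penalty: every hypothesis token appears in the reference, so unigram precision is perfect. A quick sketch of the arithmetic (assuming unigram weights only, as in the code above):

```python
import math

# Unigram precision: all 3 hypothesis tokens ("This", "is", "cat")
# appear in the reference, so precision = 3/3 = 1.0.
unigram_precision = 3 / 3

# Brevity penalty: the hypothesis (3 tokens) is shorter than the
# reference (4 tokens), so BP = exp(1 - ref_len / hyp_len).
brevity_penalty = math.exp(1 - 4 / 3)

score = brevity_penalty * unigram_precision
print(round(score, 4))  # 0.7165
```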

Any help would be appreciated.


2 Answers

TL;DR

>>> import nltk
>>> hypothesis = ['This', 'is', 'cat'] 
>>> reference = ['This', 'is', 'a', 'cat']
>>> references = [reference] # list of references for 1 sentence.
>>> list_of_references = [references] # list of references for all sentences in corpus.
>>> list_of_hypotheses = [hypothesis] # list of hypotheses that corresponds to list of references.
>>> nltk.translate.bleu_score.corpus_bleu(list_of_references, list_of_hypotheses)
0.6025286104785453
>>> nltk.translate.bleu_score.sentence_bleu(references, hypothesis)
0.6025286104785453

(Note: you have to pull the latest version of NLTK from the develop branch to get a stable version of the BLEU score implementation.)


In long:

Actually, if there is only one reference and one hypothesis in your whole corpus, both corpus_bleu() and sentence_bleu() should return the same value, as shown in the example above.

In the code, we see that sentence_bleu() is actually a duck-type of corpus_bleu():

def sentence_bleu(references, hypothesis, weights=(0.25, 0.25, 0.25, 0.25),
                  smoothing_function=None):
    return corpus_bleu([references], [hypothesis], weights, smoothing_function)

And if we look at the parameters of sentence_bleu:

def sentence_bleu(references, hypothesis, weights=(0.25, 0.25, 0.25, 0.25),
                  smoothing_function=None):
    """
    :param references: reference sentences
    :type references: list(list(str))
    :param hypothesis: a hypothesis sentence
    :type hypothesis: list(str)
    :param weights: weights for unigrams, bigrams, trigrams and so on
    :type weights: list(float)
    :return: The sentence-level BLEU score.
    :rtype: float
    """

The references input for sentence_bleu is a list(list(str)).

So if you have a sentence string, e.g. "This is a cat", you have to tokenize it to get a list of strings, ["This", "is", "a", "cat"], and since BLEU allows multiple references, the references argument must be a list of lists of strings. For example, if you had a second reference, "This is a feline", your input to sentence_bleu() would be:

references = [ ["This", "is", "a", "cat"], ["This", "is", "a", "feline"] ]
hypothesis = ["This", "is", "cat"]
sentence_bleu(references, hypothesis)
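To see how multiple references are actually used, here is a minimal sketch (not NLTK's own code) of clipped unigram precision: each hypothesis token's count is clipped to the maximum count found in any single reference (the clipping rule from Papineni et al. 2002):

```python
from collections import Counter

def clipped_unigram_precision(references, hypothesis):
    # Count hypothesis tokens, then clip each count to the maximum
    # count of that token observed in any one reference.
    hyp_counts = Counter(hypothesis)
    max_ref_counts = Counter()
    for ref in references:
        for tok, n in Counter(ref).items():
            max_ref_counts[tok] = max(max_ref_counts[tok], n)
    clipped = sum(min(n, max_ref_counts[tok]) for tok, n in hyp_counts.items())
    return clipped / sum(hyp_counts.values())

references = [["This", "is", "a", "cat"], ["This", "is", "a", "feline"]]
hypothesis = ["This", "is", "cat"]
print(clipped_unigram_precision(references, hypothesis))  # 1.0
```

All three hypothesis tokens occur in at least one reference, so the clipped precision is 3/3 = 1.0; the only thing pulling the final BLEU score below 1 is the brevity penalty.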

As for the list_of_references parameter of corpus_bleu(), it is basically a list of whatever sentence_bleu() takes as references:

def corpus_bleu(list_of_references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25),
                smoothing_function=None):
    """
    :param list_of_references: a corpus of lists of reference sentences, w.r.t. hypotheses
    :type list_of_references: list(list(list(str)))
    :param hypotheses: a list of hypothesis sentences
    :type hypotheses: list(list(str))
    :param weights: weights for unigrams, bigrams, trigrams and so on
    :type weights: list(float)
    :return: The corpus-level BLEU score.
    :rtype: float
    """
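Putting those types side by side, the nesting for a hypothetical two-sentence corpus (the second sentence pair is made up for illustration) looks like this:

```python
# One hypothesis per sentence; each sentence may have several references.
hyp1 = ["This", "is", "cat"]
ref1a = ["This", "is", "a", "cat"]
ref1b = ["This", "is", "a", "feline"]

hyp2 = ["It", "sleeps", "all", "day"]           # made-up second sentence
ref2a = ["It", "sleeps", "all", "day", "long"]

hypotheses = [hyp1, hyp2]                       # list(list(str))
list_of_references = [[ref1a, ref1b], [ref2a]]  # list(list(list(str)))

# The two outer lists must stay aligned, sentence by sentence.
assert len(list_of_references) == len(hypotheses)
```

These two structures are exactly what you would pass as corpus_bleu(list_of_references, hypotheses).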

Besides the doctests in bleu_score.py, you can also look at the unit tests in NLTK's test suite to see how each component of bleu_score.py is used.

By the way, since sentence_bleu is imported as bleu in [nltk.translate.__init__.py](https://github.com/nltk/nltk/blob/develop/nltk/translate/init.py#L21), using

from nltk.translate import bleu 

is the same as:

from nltk.translate.bleu_score import sentence_bleu

In code:

>>> from nltk.translate import bleu
>>> from nltk.translate.bleu_score import sentence_bleu
>>> from nltk.translate.bleu_score import corpus_bleu
>>> bleu == sentence_bleu
True
>>> bleu == corpus_bleu
False

Let's take a look:

>>> help(nltk.translate.bleu_score.corpus_bleu)
Help on function corpus_bleu in module nltk.translate.bleu_score:

corpus_bleu(list_of_references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=None)
    Calculate a single corpus-level BLEU score (aka. system-level BLEU) for all 
    the hypotheses and their respective references.  

    Instead of averaging the sentence level BLEU scores (i.e. macro-average
    precision), the original BLEU metric (Papineni et al. 2002) accounts for 
    the micro-average precision (i.e. summing the numerators and denominators
    for each hypothesis-reference(s) pairs before the division).
    ...
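That micro- vs. macro-average distinction can be sketched without NLTK. A simplified example using clipped unigram counts only (real BLEU also combines higher-order n-grams and the brevity penalty); the corpus pairs below are made up for illustration:

```python
from collections import Counter

def clipped_matches(reference, hypothesis):
    """Return (clipped unigram matches, hypothesis length) for one pair."""
    ref_counts = Counter(reference)
    matches = sum(min(n, ref_counts[tok])
                  for tok, n in Counter(hypothesis).items())
    return matches, len(hypothesis)

corpus = [
    (["This", "is", "a", "cat"], ["This", "is", "cat"]),                # 3/3
    (["The", "dog", "barks"], ["A", "big", "dog", "barks", "loudly"]),  # 2/5
]

# Macro average: score each sentence separately, then average the scores.
macro = sum(m / length for m, length in
            (clipped_matches(r, h) for r, h in corpus)) / len(corpus)

# Micro average (what corpus_bleu does): sum the numerators and the
# denominators over the whole corpus before dividing once.
total_matches = sum(clipped_matches(r, h)[0] for r, h in corpus)
total_length = sum(clipped_matches(r, h)[1] for r, h in corpus)
micro = total_matches / total_length

print(macro, micro)  # 0.7 vs 0.625: the two averages genuinely differ
```

The micro average weights each sentence by its length, so a long, low-precision hypothesis drags the corpus score down more than it would under a plain average of per-sentence scores.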

You can probably understand the description of the algorithm better than I could "explain" it to you, so I won't try. If the docstring doesn't clear things up enough, take a look at the source itself. Or find it locally:

>>> nltk.translate.bleu_score.__file__
'.../lib/python3.4/site-packages/nltk/translate/bleu_score.py'
