Reed-Solomon errata decoding (erasures + errors)


I am trying to implement a Reed-Solomon encoder-decoder in Python that supports the decoding of both erasures and errors, and it's driving me crazy.

The implementation currently supports decoding either errors or erasures, but not both at the same time (even when we stay below the theoretical bound of 2*errors + erasures <= (n-k)).

From Blahut's papers (here and here), it seems we only need to initialize the error locator polynomial with the erasure locator polynomial in order to implicitly compute the errata locator polynomial inside Berlekamp-Massey.

This approach works partially for me: it works when I have 2*errors + erasures < (n-k)/2, but in fact, after debugging, it only works because BM computes an error locator polynomial that gets exactly the same value as the erasure locator polynomial (because we are below the limit for errors-only correction), and thus it gets truncated via the Galois field and we end up with the correct value of the erasure locator polynomial (at least that's how I understand it, I may be wrong).

However, when we go above (n-k)/2, for example if n = 20 and k = 11, then we have (n-k) = 9 erased symbols we should be able to correct; if we feed in 5 erasures, BM just goes wrong. If we feed in 4 erasures + 1 error (we are still well below the bound, since 2*errors + erasures = 2+4 = 6 < 9), BM still goes wrong.
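For reference, this is the bound I keep referring to; a minimal sketch (not part of my decoder, the names are just for illustration): each error consumes two parity symbols and each erasure consumes one, so errors-and-erasures decoding should be possible whenever 2*errors + erasures <= (n-k).

def within_errata_bound(n, k, num_errors, num_erasures):
    # Each error costs 2 parity symbols, each erasure costs 1.
    return 2 * num_errors + num_erasures <= n - k

# Example from above: n=20, k=11, 1 error + 4 erasures -> 2*1 + 4 = 6 <= 9, so it should decode.
assert within_errata_bound(20, 11, 1, 4)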

The exact Berlekamp-Massey algorithm I implemented can be found in this presentation (pages 15-17), and a very similar description can be found here and here; below I attach a copy of the mathematical description:

Berlekamp-Massey algorithm

Now, I have reproduced this mathematical algorithm almost exactly in Python code. What I would like is to extend it to support erasures, which I tried by initializing the error locator sigma with the erasure locator:

def _berlekamp_massey(self, s, k=None, erasures_loc=None):
    '''Computes and returns the error locator polynomial (sigma) and the
    error evaluator polynomial (omega).
    If the erasures locator is specified, we will return an errors-and-erasures locator polynomial and an errors-and-erasures evaluator polynomial.
    The parameter s is the syndrome polynomial (syndromes encoded in a
    generator function) as returned by _syndromes. Don't be confused with
    the other s = (n-k)/2

    Notes:
    The error polynomial:
    E(x) = E_0 + E_1 x + ... + E_(n-1) x^(n-1)

    j_1, j_2, ..., j_s are the error positions. (There are at most s
    errors)

    Error location X_i is defined: X_i = a^(j_i)
    that is, the power of a corresponding to the error location

    Error magnitude Y_i is defined: E_(j_i)
    that is, the coefficient in the error polynomial at position j_i

    Error locator polynomial:
    sigma(z) = Product( 1 - X_i * z, i=1..s )
    roots are the reciprocals of the error locations
    ( 1/X_1, 1/X_2, ...)

    Error evaluator polynomial omega(z) is here computed at the same time as sigma, but it can also be constructed afterwards using the syndrome and sigma (see _find_error_evaluator() method).
    '''
    # For errors-and-erasures decoding, see: Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m
    # also see: Blahut, Richard E. "A universal Reed-Solomon decoder." IBM Journal of Research and Development 28.2 (1984): 150-158. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.2084&rep=rep1&type=pdf
    # or alternatively see the reference book by Blahut: Blahut, Richard E. Theory and practice of error control codes. Addison-Wesley, 1983.
    # and another good alternative book with concrete programming examples: Jiang, Yuan. A practical guide to error-control coding using Matlab. Artech House, 2010.
    n = self.n
    if not k: k = self.k

    # Initialize:
    if erasures_loc:
        sigma = [ Polynomial(erasures_loc.coefficients) ] # copy erasures_loc by creating a new Polynomial
        B = [ Polynomial(erasures_loc.coefficients) ]
    else:
        sigma =  [ Polynomial([GF256int(1)]) ] # error locator polynomial. Also called Lambda in other notations.
        B =    [ Polynomial([GF256int(1)]) ] # this is the error locator support/secondary polynomial, which is a funky way to say that it's just a temporary variable that will help us construct sigma, the error locator polynomial
    omega =  [ Polynomial([GF256int(1)]) ] # error evaluator polynomial. We don't need to initialize it with erasures_loc, it will still work, because Delta is computed using sigma, which itself is correctly initialized with erasures if needed.
    A =  [ Polynomial([GF256int(0)]) ] # this is the error evaluator support/secondary polynomial, to help us construct omega
    L =      [ 0 ] # necessary variable to check bounds (to avoid wrongly eliminating the higher order terms). For more infos, see https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf
    M =      [ 0 ] # optional variable to check bounds (so that we do not mistakenly overwrite the higher order terms). This is not necessary, it's only an additional safe check. For more infos, see the presentation decoding_rs.pdf by Andrew Brown in the doc folder.

    # Polynomial constants:
    ONE = Polynomial(z0=GF256int(1))
    ZERO = Polynomial(z0=GF256int(0))
    Z = Polynomial(z1=GF256int(1)) # used to shift polynomials, simply multiply your poly * Z to shift

    s2 = ONE + s

    # Iteratively compute the polynomials 2s times. The last ones will be
    # correct
    for l in xrange(0, n-k):
        K = l+1
        # Goal for each iteration: Compute sigma[K] and omega[K] such that
        # (1 + s)*sigma[l] == omega[l] in mod z^(K)

        # For this particular loop iteration, we have sigma[l] and omega[l],
        # and are computing sigma[K] and omega[K]

        # First find Delta, the non-zero coefficient of z^(K) in
        # (1 + s) * sigma[l]
        # This delta is valid for l (this iteration) only
        Delta = ( s2 * sigma[l] ).get_coefficient(l+1) # Delta is also known as the Discrepancy, and is always a scalar (not a polynomial).
        # Make it a polynomial of degree 0, just for ease of computation with polynomials sigma and omega.
        Delta = Polynomial(x0=Delta)

        # Can now compute sigma[K] and omega[K] from
        # sigma[l], omega[l], B[l], A[l], and Delta
        sigma.append( sigma[l] - Delta * Z * B[l] )
        omega.append( omega[l] - Delta * Z * A[l] )

        # Now compute the next B and A
        # There are two ways to do this
        # This is based on a messy case analysis on the degrees of the four polynomials sigma, omega, A and B in order to minimize the degrees of A and B. For more infos, see https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf
        # In fact it ensures that the degree of the final polynomials aren't too large.
        if Delta == ZERO or 2*L[l] > K \
            or (2*L[l] == K and M[l] == 0):
            # Rule A
            B.append( Z * B[l] )
            A.append( Z * A[l] )
            L.append( L[l] )
            M.append( M[l] )

        elif (Delta != ZERO and 2*L[l] < K) \
            or (2*L[l] == K and M[l] != 0):
            # Rule B
            B.append( sigma[l] // Delta )
            A.append( omega[l] // Delta )
            L.append( K - L[l] )
            M.append( 1 - M[l] )

        else:
            raise Exception("Code shouldn't have gotten here")

    return sigma[-1], omega[-1]

Polynomial and GF256int are generic implementations of, respectively, polynomials and the Galois field over 2^8. These classes are unit-tested and normally bug-proof. The same goes for the rest of the Reed-Solomon encoding/decoding methods such as Forney and Chien search. The full code with a quick test case for the issue I describe here can be found here: http://codepad.org/l2Qi0y8o
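For context, this is roughly the interface that the BM code above assumes from those two classes (a sketch inferred from the calls in the code, not the actual class definitions from the codepad link):

# GF256int wraps a GF(2^8) element and overloads the arithmetic operators.
a = GF256int(3) ** 5

# Polynomial stores GF256int coefficients (highest degree first) and overloads
# +, -, *, // and ==. Keyword arguments set a single coefficient by degree:
# z0 (or x0) is the constant term, z1 the coefficient of z^1, and so on.
ONE = Polynomial(z0=GF256int(1))
Z = Polynomial(z1=GF256int(1))            # multiplying by Z shifts a polynomial up by one degree
delta = (ONE * Z).get_coefficient(1)      # extract one coefficient as a GF256int scalar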

Here is a sample output:

(sample output listing omitted)

Here, the erasures-only decoding is always correct, since it doesn't use BM at all to compute the erasure locator. Normally, the other two test cases should output exactly the same sigma, but they simply don't.

The fact that the problem comes from BM is blatant when you compare the first two test cases: the syndrome and the erasure locator are the same, yet the resulting sigma is totally different (in the second test BM is used, whereas in the first, erasures-only test case, BM is not called at all).

Many thanks for any help or any idea on how I could debug this. Note that your answer can be mathematical or code-based, but please explain what is going wrong with my approach.

/EDIT: still haven't found how to correctly implement an errata (errors-and-erasures) BM decoder (see my answer below). A bounty is up for anyone who can fix the issue (or at least guide me towards a solution).

/EDIT2: silly me, sorry, I just re-read the schema and found that I missed the change in the assignment L = r - L - erasures_count ... I have updated the code to fix that and re-accepted my answer.


3 Answers

This is in response to gaborous' comment. It does not show how to modify Berlekamp-Massey to handle erasures. Instead, it shows an alternative that generates modified (Forney) syndromes, which remove the erasures' contribution from the syndromes, and then uses the modified syndromes with any standard error-only decoding algorithm to generate the error locator polynomial. The error locator polynomial and the erasure locator polynomial are then combined (multiplied) to create the errata locator polynomial.

This approach is not optimal, since there are ways to enhance a general-purpose decoder to handle both erasures and errors directly, but it is more generic.
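In the question's Python conventions, and assuming error_loc is the error locator polynomial returned by a standard error-only decoder run on the modified syndromes, and erasure_loc is the erasure locator built from the known erasure positions, the combination step is just a polynomial product (a sketch, not code from this answer):

errata_loc = error_loc * erasure_loc   # errata (errors-and-erasures) locator polynomial

Forney's algorithm can then be applied with errata_loc to compute the magnitudes of both the errors and the erasures in one pass.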

Example legacy code, in C, to generate the modified syndromes. The "vectors" in this code include both a size and a data array. vErsf is an array the same size as the data (the codeword), with 0 for non-erased positions and 1 for erased positions. The erasures are converted into an erasure locator polynomial (Root2Poly), which is then used to convert the standard syndromes into modified (Forney) syndromes.

typedef unsigned char ELEM;             /* element type */

typedef struct{                         /* vector structure */
    ELEM  size;
    ELEM  data[255];
}VECTOR;

static VECTOR   vData;                  /* data */
static VECTOR   vErsf;                  /* erasure flags (same size as data) */
static VECTOR   vSyndromes;             /* syndromes */
static VECTOR   vErsLocators;           /* erasure locators */
static VECTOR   pErasures;              /* erasure poly */
static VECTOR   vModSyndromes;          /* modified syndromes */

/*                                   */
/*      Root2Poly(pPDst, pVSrc)         convert roots into polynomial   */
/*                                   */
static void Root2Poly(VECTOR *pPDst, VECTOR *pVSrc)
{
int i, j;

    pPDst->size = pVSrc->size+1;
    pPDst->data[0] = 1;
    memset(&pPDst->data[1], 0, pVSrc->size*sizeof(ELEM));
    for(j = 0; j < pVSrc->size; j++){
        for(i = j; i >= 0; i--){
            pPDst->data[i+1] = GFSub(pPDst->data[i+1],
                    GFMpy(pPDst->data[i], pVSrc->data[j]));}}
}

/*                                   */
/*      GenErasures     generate vErsLocat...and pErasures              */
/*                                                                      */
/*      Scan vErsf right to left; for each non-zero flag byte,          */
/*      set vErsLocators to Alpha**power of corresponding location.     */
/*      Then convert these locators into a polynomial.                  */
/*                                   */
static int GenErasures(void)
{
int     i, j;
ELEM    eLcr;                           /* locator */

    j = 0;                              /* j = index into vErs... */
    eLcr = 1;                           /* init locator */
    for(i = vErsf.size; i;){
        i--;
        if(vErsf.data[i]){              /* if byte flagged */
            vErsLocators.data[j] = eLcr; /*     set locator */
            j++;
            if(j > vGenRoots.size){      /*    exit if too many */
                return(1);}}
        eLcr = GFMpy(eLcr, eAlpha);}     /* bump locator */

    vErsLocators.size = j;              /* set size */

    Root2Poly(&pErasures, &vErsLocators); /* generate poly */

    return(0);
}

/*                                   */
/*      GenModSyndromes         Generate vModSyndromes                  */
/*                                                                      */
/*      generate modified syndromes                                     */
/*                                   */
static void     GenModSyndromes(void)
{
int     iM;                             /* vModSyn.. index */
int     iS;                             /* vSyn.. index */
int     iP;                             /* pErs.. index */

    vModSyndromes.size = vSyndromes.size-vErsLocators.size;

    if(vErsLocators.size == 0){         /* if no erasures, copy */
        memcpy(vModSyndromes.data, vSyndromes.data, 
               vSyndromes.size*sizeof(ELEM));
        return;}

    iS = 0;
    for(iM = 0; iM < vModSyndromes.size; iM++){ /* modify */
        vModSyndromes.data[iM] = 0;
        iP = pErasures.size;
        while(iP){
            iP--;
            vModSyndromes.data[iM] = GFAdd(vModSyndromes.data[iM],
                GFMpy(vSyndromes.data[iS], pErasures.data[iP]));
            iS++;}
        iS -= pErasures.size-1;}
}
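For readers more comfortable with Python, here is a rough transcription of GenModSyndromes above; gf_mul and gf_add stand for assumed GF(2^8) multiply/add helpers, and erasure_poly is assumed to be in the same highest-degree-first order that Root2Poly produces (erasure_poly[0] == 1):

def gen_mod_syndromes(syndromes, erasure_poly, gf_mul, gf_add):
    '''Convert the standard syndromes into modified (Forney) syndromes.'''
    v = len(erasure_poly) - 1                  # number of erasures
    mod_syndromes = []
    for i in range(len(syndromes) - v):        # one modified syndrome per remaining check symbol
        acc = 0
        for t in range(v + 1):                 # dot product with the erasure polynomial coefficients
            acc = gf_add(acc, gf_mul(syndromes[i + t], erasure_poly[v - t]))
        mod_syndromes.append(acc)
    return mod_syndromes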

After reading lots of research papers and books, the only place where I found the answer is in this book (readable online on Google Books, but not available as a PDF):

"Algebraic codes for data transmission", Blahut, Richard E., 2003, Cambridge university press.

Here are a few extracts from this book detailing the exact Berlekamp-Massey algorithm I implemented (except for the matrix/vectorized representation of the polynomial operations):

(excerpt figures from the book omitted)

And here is the Berlekamp-Massey algorithm for Reed-Solomon:

Errors-and-erasures Berlekamp-Massey algorithm for Reed-Solomon

As you can see, contrary to the usual description where you only need to initialize Lambda, the error locator polynomial, with the value of the previously computed erasure locator polynomial, you also need to skip the first v iterations, where v is the number of erasures. Note that this is not equivalent to skipping the last v iterations: you need to skip the first v iterations, because r (the iteration counter, K in my implementation) is used not only to count iterations but also to generate the correct discrepancy factor Delta.

Here is the resulting, modified code to support erasures as well as errors, up to v + 2*e <= (n-k):

def _berlekamp_massey(self, s, k=None, erasures_loc=None, erasures_eval=None, erasures_count=0):
    '''Computes and returns the errata (errors+erasures) locator polynomial (sigma) and the
    error evaluator polynomial (omega) at the same time.
    If the erasures locator is specified, we will return an errors-and-erasures locator polynomial and an errors-and-erasures evaluator polynomial, else it will compute only errors. With erasures in addition to errors, it can simultaneously decode up to v+2e <= (n-k) where v is the number of erasures and e the number of errors.
    Mathematically speaking, this is equivalent to a spectral analysis (see Blahut, "Algebraic Codes for Data Transmission", 2003, chapter 7.6 Decoding in Time Domain).
    The parameter s is the syndrome polynomial (syndromes encoded in a
    generator function) as returned by _syndromes.

    Notes:
    The error polynomial:
    E(x) = E_0 + E_1 x + ... + E_(n-1) x^(n-1)

    j_1, j_2, ..., j_s are the error positions. (There are at most s
    errors)

    Error location X_i is defined: X_i = α^(j_i)
    that is, the power of α (alpha) corresponding to the error location

    Error magnitude Y_i is defined: E_(j_i)
    that is, the coefficient in the error polynomial at position j_i

    Error locator polynomial:
    sigma(z) = Product( 1 - X_i * z, i=1..s )
    roots are the reciprocals of the error locations
    ( 1/X_1, 1/X_2, ...)

    Error evaluator polynomial omega(z) is here computed at the same time as sigma, but it can also be constructed afterwards using the syndrome and sigma (see _find_error_evaluator() method).

    It can be seen that the algorithm tries to iteratively solve for the error locator polynomial by
    solving one equation after another and updating the error locator polynomial. If it turns out that it
    cannot solve the equation at some step, then it computes the error and weights it by the last
    non-zero discriminant found, and delays the weighted result to increase the polynomial degree
    by 1. Ref: "Reed Solomon Decoder: TMS320C64x Implementation" by Jagadeesh Sankaran, December 2000, Application Report SPRA686

    The best paper I found describing the BM algorithm for errata (errors-and-erasures) evaluator computation is in "Algebraic Codes for Data Transmission", Richard E. Blahut, 2003.
    '''
    # For errors-and-erasures decoding, see: "Algebraic Codes for Data Transmission", Richard E. Blahut, 2003 and (but it's less complete): Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m
    # also see: Blahut, Richard E. "A universal Reed-Solomon decoder." IBM Journal of Research and Development 28.2 (1984): 150-158. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.2084&rep=rep1&type=pdf
    # and another good alternative book with concrete programming examples: Jiang, Yuan. A practical guide to error-control coding using Matlab. Artech House, 2010.
    n = self.n
    if not k: k = self.k

    # Initialize, depending on if we include erasures or not:
    if erasures_loc:
        sigma = [ Polynomial(erasures_loc.coefficients) ] # copy erasures_loc by creating a new Polynomial, so that we initialize the errata locator polynomial with the erasures locator polynomial.
        B = [ Polynomial(erasures_loc.coefficients) ]
        omega =  [ Polynomial(erasures_eval.coefficients) ] # to compute omega (the evaluator polynomial) at the same time, we also need to initialize it with the partial erasures evaluator polynomial
        A =  [ Polynomial(erasures_eval.coefficients) ] # TODO: fix the initial value of the evaluator support polynomial, because currently the final omega is not correct (it contains higher order terms that should be removed by the end of BM)
    else:
        sigma =  [ Polynomial([GF256int(1)]) ] # error locator polynomial. Also called Lambda in other notations.
        B =    [ Polynomial([GF256int(1)]) ] # this is the error locator support/secondary polynomial, which is a funky way to say that it's just a temporary variable that will help us construct sigma, the error locator polynomial
        omega =  [ Polynomial([GF256int(1)]) ] # error evaluator polynomial. We don't need to initialize it with erasures_loc, it will still work, because Delta is computed using sigma, which itself is correctly initialized with erasures if needed.
        A =  [ Polynomial([GF256int(0)]) ] # this is the error evaluator support/secondary polynomial, to help us construct omega
    L = [ 0 ] # update flag: necessary variable to check when updating is necessary and to check bounds (to avoid wrongly eliminating the higher order terms). For more infos, see https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf
    M = [ 0 ] # optional variable to check bounds (so that we do not mistakenly overwrite the higher order terms). This is not necessary, it's only an additional safe check. For more infos, see the presentation decoding_rs.pdf by Andrew Brown in the doc folder.

    # Fix the syndrome shifting: when computing the syndrome, some implementations may prepend a 0 coefficient for the lowest degree term (the constant). This is a case of syndrome shifting, thus the syndrome will be bigger than the number of ecc symbols (I don't know what purpose serves this shifting). If that's the case, then we need to account for the syndrome shifting when we use the syndrome such as inside BM, by skipping those prepended coefficients.
    # Another way to detect the shifting is to detect the 0 coefficients: by definition, a syndrome does not contain any 0 coefficient (except if there are no errors/erasures, in this case they are all 0). This however doesn't work with the modified Forney syndrome (that we do not use in this lib but it may be implemented in the future), which set to 0 the coefficients corresponding to erasures, leaving only the coefficients corresponding to errors.
    synd_shift = 0
    if len(s) > (n-k): synd_shift = len(s) - (n-k)

    # Polynomial constants:
    ONE = Polynomial(z0=GF256int(1))
    ZERO = Polynomial(z0=GF256int(0))
    Z = Polynomial(z1=GF256int(1)) # used to shift polynomials, simply multiply your poly * Z to shift

    # Precaching
    s2 = ONE + s

    # Iteratively compute the polynomials n-k-erasures_count times. The last ones will be correct (since the algorithm refines the error/errata locator polynomial iteratively depending on the discrepancy, which is kind of a difference-from-correctness measure).
    for l in xrange(0, n-k-erasures_count): # skip the first erasures_count iterations because we already computed the partial errata locator polynomial (by initializing with the erasures locator polynomial)
        K = erasures_count+l+synd_shift # skip the FIRST erasures_count iterations (not the last iterations, that's very important!)

        # Goal for each iteration: Compute sigma[l+1] and omega[l+1] such that
        # (1 + s)*sigma[l] == omega[l] in mod z^(K)

        # For this particular loop iteration, we have sigma[l] and omega[l],
        # and are computing sigma[l+1] and omega[l+1]

        # First find Delta, the non-zero coefficient of z^(K) in
        # (1 + s) * sigma[l]
        # Note that adding 1 to the syndrome s is not really necessary, you can do as well without.
        # This delta is valid for l (this iteration) only
        Delta = ( s2 * sigma[l] ).get_coefficient(K) # Delta is also known as the Discrepancy, and is always a scalar (not a polynomial).
        # Make it a polynomial of degree 0, just for ease of computation with polynomials sigma and omega.
        Delta = Polynomial(x0=Delta)

        # Can now compute sigma[l+1] and omega[l+1] from
        # sigma[l], omega[l], B[l], A[l], and Delta
        sigma.append( sigma[l] - Delta * Z * B[l] )
        omega.append( omega[l] - Delta * Z * A[l] )

        # Now compute the next support polynomials B and A
        # There are two ways to do this
        # This is based on a messy case analysis on the degrees of the four polynomials sigma, omega, A and B in order to minimize the degrees of A and B. For more infos, see https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf
        # In fact it ensures that the degree of the final polynomials aren't too large.
        if Delta == ZERO or 2*L[l] > K+erasures_count \
            or (2*L[l] == K+erasures_count and M[l] == 0):
        #if Delta == ZERO or len(sigma[l+1]) <= len(sigma[l]): # another way to compute when to update, and it doesn't require to maintain the update flag L
            # Rule A
            B.append( Z * B[l] )
            A.append( Z * A[l] )
            L.append( L[l] )
            M.append( M[l] )

        elif (Delta != ZERO and 2*L[l] < K+erasures_count) \
            or (2*L[l] == K+erasures_count and M[l] != 0):
        # elif Delta != ZERO and len(sigma[l+1]) > len(sigma[l]): # another way to compute when to update, and it doesn't require to maintain the update flag L
            # Rule B
            B.append( sigma[l] // Delta )
            A.append( omega[l] // Delta )
            L.append( K - L[l] ) # the update flag L is tricky: in Blahut's schema, it's mandatory to use `L = K - L - erasures_count` (and indeed in a previous draft of this function, if you forgot to do `- erasures_count` it would lead to correcting only 2*(errors+erasures) <= (n-k) instead of 2*errors+erasures <= (n-k)), but in this latest draft, this will lead to a wrong decoding in some cases where it should correctly decode! Thus you should try with and without `- erasures_count` to update L on your own implementation and see which one works OK without producing wrong decoding failures.
            M.append( 1 - M[l] )

        else:
            raise Exception("Code shouldn't have gotten here")

    # Hack to fix the simultaneous computation of omega, the errata evaluator polynomial: because A (the errata evaluator support polynomial) is not correctly initialized (I could not find any info in academic papers). So at the end, we get the correct errata evaluator polynomial omega + some higher order terms that should not be present, but since we know that sigma is always correct and the maximum degree should be the same as omega, we can fix omega by truncating too high order terms.
    if omega[-1].degree > sigma[-1].degree: omega[-1] = Polynomial(omega[-1].coefficients[-(sigma[-1].degree+1):])

    # Return the last result of the iterations (since BM compute iteratively, the last iteration being correct - it may already be before, but we're not sure)
    return sigma[-1], omega[-1]

def _find_erasures_locator(self, erasures_pos):
    '''Compute the erasures locator polynomial from the erasures positions (the positions must be relative to the x coefficient, eg: "hello worldxxxxxxxxx" is tampered to "h_ll_ worldxxxxxxxxx" with xxxxxxxxx being the ecc of length n-k=9, here the string positions are [1, 4], but the coefficients are reversed since the ecc characters are placed as the first coefficients of the polynomial, thus the coefficients of the erased characters are n-1 - [1, 4] = [18, 15] = erasures_loc to be specified as an argument.'''
    # See: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Error_Control_Coding/lecture7.pdf and Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m
    erasures_loc = Polynomial([GF256int(1)]) # just to init because we will multiply, so it must be 1 so that the multiplication starts correctly without nulling any term
    # erasures_loc is very simple to compute: erasures_loc = prod(1 - x*alpha[j]**i) for i in erasures_pos and where alpha is the alpha chosen to evaluate polynomials (here in this library it's gf(3)). To generate c*x where c is a constant, we simply generate a Polynomial([c, 0]) where 0 is the constant and c is positionned to be the coefficient for x^1. See https://en.wikipedia.org/wiki/Forney_algorithm#Erasures
    for i in erasures_pos:
        erasures_loc = erasures_loc * (Polynomial([GF256int(1)]) - Polynomial([GF256int(self.generator)**i, 0]))
    return erasures_loc
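For completeness, here is a sketch of how these pieces could be wired together in the decoding path. The exact signature of _syndromes is an assumption (it is only referenced, not shown, here); the other methods are the ones defined above, and the local variable names are just for illustration:

# Sketch of the errata decoding path (not the exact code from the codepad listing):
synd = self._syndromes(received_msg, k=k)                      # syndrome polynomial (assumed signature)
erasures_loc = self._find_erasures_locator(erasures_pos)       # locator from the known erased coefficient positions
erasures_eval = self._find_error_evaluator(synd, erasures_loc, k=k)  # partial (erasures-only) evaluator
sigma, omega = self._berlekamp_massey(synd, k=k,
                                      erasures_loc=erasures_loc,
                                      erasures_eval=erasures_eval,
                                      erasures_count=len(erasures_pos))
# sigma is now the errata locator; Chien search and Forney then give the positions and magnitudes.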

Note: Sigma, Omega, A, B, L and M are all lists of polynomials (so we keep the whole history of every intermediate polynomial computed at each iteration). This can of course be optimized, since we only ever need Sigma[l], Sigma[l-1], Omega[l], Omega[l-1], A[l], B[l], L[l] and M[l] (so only Sigma and Omega need to keep the previous iteration in memory; the other variables don't).

Note 2: the update flag L is tricky: in some implementations, doing it exactly as in Blahut's schema will lead to wrong failures when decoding. In a previous implementation of mine, using L = K - L - erasures_count was mandatory to correctly decode both errors and erasures up to the Singleton bound, but in my latest implementation I had to use L = K - L (even when there are erasures) to avoid wrong decoding failures. You should try both on your own implementation and see which one does not produce any wrong decoding failure. See the issues list below for more information.

The only issue with this algorithm is that it does not describe how to simultaneously compute Omega, the error evaluator polynomial (the book describes how to initialize Omega for errors-only decoding, not for errors-and-erasures decoding). I tried several variations, and the above works, but not completely: in the end, Omega will include higher-order terms that should have been cancelled out. Probably Omega, or A, the error evaluator support polynomial, is not initialized with the right value.

However, you can work around this by trimming the too-high-order terms from the Omega polynomial (since it should have the same degree as Lambda/Sigma):

if omega[-1].degree > sigma[-1].degree: omega[-1] = Polynomial(omega[-1].coefficients[-(sigma[-1].degree+1):])

Or, alternatively, you can compute Omega from scratch after BM, using the errata locator Lambda/Sigma, which is always correctly computed:

def _find_error_evaluator(self, synd, sigma, k=None):
    '''Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).'''
    n = self.n
    if not k: k = self.k

    # Omega(x) = [ Synd(x) * Error_loc(x) ] mod x^(n-k+1)   From Blahut, Algebraic codes for data transmission, 2003
    return (synd * sigma) % Polynomial([GF256int(1)] + [GF256int(0)] * (n-k+1)) # Note that you should NOT do (1+Synd(x)) as can be seen in some books because this won't work with all primitive generators.
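As a sketch, that alternative then simply becomes (reusing the methods above; the variable names are illustrative):

sigma, _ = self._berlekamp_massey(synd, erasures_loc=erasures_loc,
                                  erasures_eval=erasures_eval,
                                  erasures_count=len(erasures_pos))
omega = self._find_error_evaluator(synd, sigma)   # recompute omega from the final sigma, discarding BM's omega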

I am looking for a better solution in the following question on CSTheory.

/EDIT: here is a description of some of the issues I ran into and how I fixed them:

  • Don't forget to initialize the error locator polynomial with the erasure locator polynomial (which you can easily compute from the syndromes and the erasure positions).
  • If you can decode errors only and erasures only perfectly, but are limited to 2*errors + erasures <= (n-k)/2, then you forgot to skip the first v iterations.
  • If you can decode both erasures and errors, but only up to 2*(errors+erasures) <= (n-k), then you forgot to update the assignment of L: it must be L = i+1 - L - erasures_count instead of L = i+1 - L. But this may actually make your decoder fail in some cases, depending on how you implemented it; see the next point.
  • My first decoder was limited to a single generator/prime polynomial/fcr, but when I updated it to be universal and added strict unit tests, the decoder failed when it shouldn't have. It seems that Blahut's schema above is wrong about L (the update flag): it must be updated using L = K - L and not L = K - L - erasures_count, because the latter sometimes makes the decoder fail even though we are under the Singleton bound (and thus should decode correctly!). This seems confirmed by the fact that computing L = K - L not only fixes those decoding issues, it also gives exactly the same result as the alternative way of updating without using the update flag L at all (i.e., the condition if Delta == ZERO or len(sigma[l+1]) <= len(sigma[l]):). But this is weird: in my previous implementation, L = K - L - erasures_count was mandatory for errors-and-erasures decoding, yet now it seems to produce wrong failures. So you should try both with and without it in your own implementation, and check whether one of them produces wrong decoding failures for you.
  • Note that the condition 2*L[l] > K becomes 2*L[l] > K+erasures_count. You may not notice any side effect at first without adding +erasures_count, but in some cases decoding will fail when it shouldn't.
  • If you can fix only one error or erasure, check that your condition is 2*L[l] > K+erasures_count and not 2*L[l] >= K+erasures_count (note the >, not >=).
  • If you can correct 2*errors + erasures <= (n-k-2) (just below the limit; e.g. with 10 ecc symbols you can only correct 4 errors instead of the normal 5), then check your syndrome and the loop inside your BM algorithm: if the syndrome starts with a 0 coefficient for the constant term x^0 (which is sometimes advised in books), then your syndrome is shifted, and the loop inside BM must then start at 1 and end at n-k+1, instead of 0 to n-k when not shifted.
  • If you can correct every symbol except the last one (the last ecc symbol), then check your ranges, particularly in your Chien search: you should not evaluate the error locator polynomial from alpha^0 to alpha^255, but from alpha^1 to alpha^256 (see the sketch after this list).
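To make that last point concrete, here is a minimal Chien-search sketch. This is not the actual Chien search from my codepad listing; it assumes the Polynomial class exposes an evaluate() method and that self.generator is the alpha used for encoding. The only point it illustrates is the evaluation range:

def _chien_search_sketch(self, sigma, n):
    '''Return the exponents i for which sigma(alpha^i) == 0 (the reciprocals of the error locations).'''
    roots = []
    for i in range(1, n + 1):   # evaluate from alpha^1 up to alpha^n, NOT from alpha^0
        if sigma.evaluate(GF256int(self.generator) ** i) == GF256int(0):   # evaluate() is an assumed helper
            roots.append(i)
    return roots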

I referenced your Python code and rewrote it in C++.

It works, and your information and example code were really helpful.

I found that the wrong failures may be caused by the M value.

According to "Algebraic codes for data transmission", the M value should not be part of the if-else conditions.

After removing M, I haven't hit any wrong failure. (Or just haven't hit one yet.)

Thanks a lot for sharing your knowledge.

    // calculate C
    Ref<ModulusPoly> T = C;

    // M just for shift x
    ArrayRef<int> m_temp(2);
    m_temp[0]=1;
    m_poly = new ModulusPoly(field_, m_temp);

    // C = C - d*B*x
    ArrayRef<int> d_temp(1);
    d_temp[0] = d;
    Ref<ModulusPoly> d_poly (new ModulusPoly(field_, d_temp));
    d_poly = d_poly->multiply(m_poly);
    d_poly = d_poly->multiply(B);
    C = C->subtract(d_poly);

    if(2*L<=n+e_size-1 && d!=0)
    {
        // b = d^-1
        ArrayRef<int> b_temp(1);
        b_temp[0] = field_.inverse(d); 
        b_poly = new ModulusPoly(field_, b_temp);

        L = n-L-e_size;
        // B = B*b = B*d^-1
        B = T->multiply(b_poly);
    }
    else
    {
        // B = B*x
        B = B->multiply(m_poly);
    }
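For reference, the variant without the M flag corresponds to the alternative update condition already present as commented-out lines in the Python version above; in that code it would read roughly as follows (a sketch of the variant, using the same variables as _berlekamp_massey, not a tested replacement):

if Delta == ZERO or len(sigma[l+1]) <= len(sigma[l]):
    # Rule A: only shift the support polynomials
    B.append( Z * B[l] )
    A.append( Z * A[l] )
else:
    # Rule B: rescale the previous sigma/omega to become the new support polynomials
    B.append( sigma[l] // Delta )
    A.append( omega[l] // Delta )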
