Problem with gradient checking in an LSTM backpropagation implementation

Posted on 2024-09-30 18:28:04


I am trying to implement my own LSTM network. I implemented the backpropagation algorithm, but it does not pass gradient checking, and I cannot figure out where the error is. Please help.
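
For context, "gradient checking" means comparing each analytically computed gradient against a centered finite-difference estimate of the loss. Here is a minimal, generic sketch of such a check; loss_fn, theta, and the tolerance are illustrative placeholders, not names from the post:

import numpy as np

def gradient_check(loss_fn, theta, analytic_grad, eps=1e-5, tol=1e-6):
    # Compare analytic_grad against a centered finite-difference
    # estimate of d loss_fn / d theta, one parameter at a time.
    num_grad = np.zeros_like(theta)
    it = np.nditer(theta, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = theta[idx]
        theta[idx] = orig + eps
        loss_plus = loss_fn(theta)
        theta[idx] = orig - eps
        loss_minus = loss_fn(theta)
        theta[idx] = orig                     # restore the parameter
        num_grad[idx] = (loss_plus - loss_minus) / (2.0 * eps)
        it.iternext()
    rel_err = np.abs(num_grad - analytic_grad) / np.maximum(
        np.abs(num_grad) + np.abs(analytic_grad), 1e-12)
    return rel_err.max() < tol, rel_err.max()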

Here is the problematic code:

def backward_propagation(self, x, y, cache):
    # T - the length of the sequence
    T = len(y)
    # perform forward propagation
    cache = self.forward_propagation(x)

    # ...

    # delta for output layer
    dy = cache['y'].copy()
    dy[np.arange(len(y)), y] -= 1.  # softmax loss gradient
    dhtmp = np.zeros((1, self.hidden_dim))
    dctmp = np.zeros((1, self.hidden_dim))

    for t in np.arange(T)[::-1]:
        dV += np.outer(dy[t], h[t].T)
        dhtmp = self.V.T.dot(dy[t])

        for bptt_step in np.arange(0, t+1)[::-1]:
            # add to gradients at each previous step
            do[bptt_step] = dhtmp * ct[bptt_step]
            dct[bptt_step] = dhtmp * o[bptt_step]

            dctmp += dct[bptt_step] * (1.0 - ct[bptt_step]**2)

            di[bptt_step] = dctmp * g[bptt_step]
            df[bptt_step] = dctmp * c[bptt_step-1]
            dg[bptt_step] = dctmp * i[bptt_step]

            # backprop activation functions
            diga[bptt_step] = di[bptt_step] * i[bptt_step] * (1.0 - i[bptt_step])
            dfga[bptt_step] = df[bptt_step] * f[bptt_step] * (1.0 - f[bptt_step])
            doga[bptt_step] = do[bptt_step] * o[bptt_step] * (1.0 - o[bptt_step])
            dgga[bptt_step] = dg[bptt_step] * (1.0 - g[bptt_step] ** 2)

            # backprop matrix multiply
            dWi += np.outer(diga[bptt_step], h[bptt_step-1])
            dWf += np.outer(dfga[bptt_step], h[bptt_step-1])
            dWo += np.outer(doga[bptt_step], h[bptt_step-1])
            dWg += np.outer(dgga[bptt_step], h[bptt_step-1])

            dUi[:, x[bptt_step]] += diga[bptt_step]
            dUf[:, x[bptt_step]] += dfga[bptt_step]
            dUo[:, x[bptt_step]] += doga[bptt_step]
            dUg[:, x[bptt_step]] += dgga[bptt_step]

            # update deltas for next step
            # here dh is accumulated as shared variable
            dhtmp = np.dot(self.Wi, diga[bptt_step])
            # dhtmp += np.dot(self.Wf, dfga[bptt_step]) <- is it needed to accumulate other dhtmp's?
            # dhtmp += np.dot(self.Wo, doga[bptt_step])
            # dhtmp += np.dot(self.Wg, dgga[bptt_step])
            dctmp = dctmp * f[bptt_step]

    return [dV, dWi, dWf, dWo, dWg, dUi, dUf, dUo, dUg]

I think the mistake may be in the matrix-vector multiplications, or in how dhtmp and dctmp are updated.
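
The commented-out lines in the code above already point at one real issue: h feeds all four gates, so the delta flowing back to the previous hidden state has to sum all four contributions, each through the transposed weight matrix. A toy two-branch sketch, independent of the post's model, that verifies the summed rule against finite differences:

import numpy as np

# h feeds two branches (stand-ins for two LSTM gates); the gradient
# w.r.t. h is the SUM of both branch gradients, not either one alone.
rng = np.random.default_rng(0)
Wa = rng.standard_normal((3, 3))
Wb = rng.standard_normal((3, 3))
h = rng.standard_normal(3)

def loss(v):
    return np.sum(np.tanh(Wa @ v)) + np.sum(np.tanh(Wb @ v))

# Analytic gradient: accumulate over both branches, transposed weights.
analytic = Wa.T @ (1.0 - np.tanh(Wa @ h) ** 2) \
         + Wb.T @ (1.0 - np.tanh(Wb @ h) ** 2)

# Centered finite differences.
eps = 1e-6
num = np.zeros_like(h)
for k in range(3):
    e = np.zeros(3)
    e[k] = eps
    num[k] = (loss(h + e) - loss(h - e)) / (2.0 * eps)

assert np.allclose(analytic, num, atol=1e-6)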


1 Answer

Answered on 2024-09-30 18:28:04

OK, after a while I finally figured it out. There was an extra inner loop. This code works correctly:

def backward_propagation(self, x, y, cache):
    # T - the length of the sequence
    T = len(y)
    # perform forward propagation
    cache = self.forward_propagation(x)

    #...

    # delta for output layer
    dy = cache['y'].copy()
    dy[np.arange(len(y)), y] -= 1.0 # softmax loss gradient
    # print("dy: ", dy)
    dhtmp = np.zeros((1, self.hidden_dim))
    dh_prev = np.zeros((1, self.hidden_dim))
    dctmp = np.zeros((1, self.hidden_dim))

    for t in np.arange(T)[::-1]:
        dV += np.outer(dy[t], h[t].T)
        dhtmp = self.V.T.dot(dy[t]) + dh_prev

        # add to gradients at each previous step
        do[t] = dhtmp * ct[t]
        dct[t] = dhtmp * o[t]

        dctmp += dct[t] * (1.0 - ct[t]**2)

        di[t] = dctmp * g[t]
        df[t] = dctmp * c[t-1]
        dg[t] = dctmp * i[t]

        # backprop activation functions
        diga[t] = di[t] * i[t] * (1.0 - i[t])
        dfga[t] = df[t] * f[t] * (1.0 - f[t])
        doga[t] = do[t] * o[t] * (1.0 - o[t])
        dgga[t] = dg[t] * (1.0 - g[t] ** 2)

        # backprop matrix multiply
        dWi += np.outer(diga[t], h[t-1])
        dWf += np.outer(dfga[t], h[t-1])
        dWo += np.outer(doga[t], h[t-1])
        dWg += np.outer(dgga[t], h[t-1])


        dUi[:, x[t]] += diga[t]
        dUf[:, x[t]] += dfga[t]
        dUo[:, x[t]] += doga[t]
        dUg[:, x[t]] += dgga[t]

        # update deltas for next step
        # here dh is accumulated as shared variable
        dh_prev = np.dot(self.Wi.T, diga[t])
        dh_prev += np.dot(self.Wf.T, dfga[t])
        dh_prev += np.dot(self.Wo.T, doga[t])
        dh_prev += np.dot(self.Wg.T, dgga[t])
        dctmp = dctmp * f[t]

    return [dV, dWi, dWf, dWo, dWg, dUi, dUf, dUo, dUg]
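
In short, two things changed relative to the question's version: the redundant inner bptt_step loop is gone, so each time step is visited exactly once while dh_prev and dctmp carry the recurrent deltas backward through time, and dh_prev now accumulates the contributions of all four gates through the transposed weight matrices, which settles the question left in the original code's comments.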

Hope someone finds this answer useful.
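
As a closing usage sketch, the corrected backward pass can be wired to the finite-difference check from earlier. The names model and calculate_loss are assumptions about the surrounding class (they do not appear in the post); the point is only to show how one gradient, here dWi, would be verified:

# Hypothetical wiring: model, calculate_loss, and the cache argument
# are assumptions, not shown in the post.
grads = model.backward_propagation(x, y, cache=None)   # [dV, dWi, ...]

def loss_wrt_Wi(W):
    model.Wi = W                  # temporarily swap the parameter in
    return model.calculate_loss(x, y)

ok, err = gradient_check(loss_wrt_Wi, model.Wi, grads[1])
print("dWi check passed:", ok, "max relative error:", err)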
