Using the code here, I wrote the backpropagation routine below for a neural network. The problem I'm facing baffles me and has pushed my debugging skills to their limit.
The problem is simple: as the network trains, its weights are driven to zero while its accuracy does not improve.
I have tried to fix it several times, verifying along the way:
Some information:
I don't know where to go from here. I have confirmed that everything I know to check works correctly, yet it still fails, so I'm asking here. Below is the code I use for backpropagation:
def backprop(train_set, wts, bias, eta):
    learning_coef = eta / len(train_set[0])
    for next_set in train_set:
        # These record the sum of the cost gradients in the batch
        sum_del_w = [np.zeros(w.shape) for w in wts]
        sum_del_b = [np.zeros(b.shape) for b in bias]
        for test, sol in next_set:
            del_w = [np.zeros(wt.shape) for wt in wts]
            del_b = [np.zeros(bt.shape) for bt in bias]
            # These two helper functions take training set data and make them useful
            next_input = conv_to_col(test)
            outp = create_tgt_vec(sol)
            # Feedforward step
            pre_sig = []
            post_sig = []
            for w, b in zip(wts, bias):
                next_input = np.dot(w, next_input) + b
                pre_sig.append(next_input)
                post_sig.append(sigmoid(next_input))
                next_input = sigmoid(next_input)
            # Backpropagation gradient
            delta = cost_deriv(post_sig[-1], outp) * sigmoid_deriv(pre_sig[-1])
            del_b[-1] = delta
            del_w[-1] = np.dot(delta, post_sig[-2].transpose())
            for i in range(2, len(wts)):
                pre_sig_vec = pre_sig[-i]
                sig_deriv = sigmoid_deriv(pre_sig_vec)
                delta = np.dot(wts[-i+1].transpose(), delta) * sig_deriv
                del_b[-i] = delta
                del_w[-i] = np.dot(delta, post_sig[-i-1].transpose())
            sum_del_w = [dw + sdw for dw, sdw in zip(del_w, sum_del_w)]
            sum_del_b = [db + sdb for db, sdb in zip(del_b, sum_del_b)]
        # Modify weights based on current batch
        wts = [wt - learning_coef * dw for wt, dw in zip(wts, sum_del_w)]
        bias = [bt - learning_coef * db for bt, db in zip(bias, sum_del_b)]
    return wts, bias
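For context, the helper functions the routine calls behave like the minimal stand-ins below. These are simplified sketches of my actual code, and the ten-class one-hot target in create_tgt_vec is just an example:

import numpy as np

def sigmoid(z):
    # Logistic activation, applied elementwise.
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_deriv(z):
    # Derivative of the logistic function, expressed in its input.
    s = sigmoid(z)
    return s * (1.0 - s)

def cost_deriv(output, target):
    # Gradient of the quadratic cost 0.5 * ||output - target||^2
    # with respect to the output activations.
    return output - target

def conv_to_col(test):
    # Flatten one input sample into an (n, 1) column vector.
    return np.asarray(test, dtype=float).reshape(-1, 1)

def create_tgt_vec(sol, n_classes=10):
    # One-hot column vector for the class label sol.
    tgt = np.zeros((n_classes, 1))
    tgt[sol] = 1.0
    return tgt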
Following Shep's suggestion, I checked what happens when training a network of shape [2, 1, 1], and in that case the network does in fact train correctly. At this point my best guess is that the gradient adjusts too strongly toward 0 and too weakly toward 1, causing a net decrease even though each step should increase the weights, but I'm not sure.
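One way to test this guess would be to log per-layer gradient statistics just before the weight update. The helper below is a hypothetical sketch (log_grad_stats is my own name, not part of the routine above):

import numpy as np

def log_grad_stats(wts, sum_del_w, step):
    # With the update wt -= learning_coef * dw, a positive value of
    # mean(sign(wt) * dw) for a layer means the batch gradient is, on net,
    # pulling that layer's weights toward zero.
    for layer, (wt, dw) in enumerate(zip(wts, sum_del_w)):
        shrink = np.mean(np.sign(wt) * dw)
        print("step %d, layer %d: |dw| = %.6f, shrink = %+.6f"
              % (step, layer, np.linalg.norm(dw), shrink))

If the shrink value stays positive across layers and steps, that would confirm a systematic pull toward zero rather than a one-off numerical issue.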
I think your problem is the choice of initial weights and of the weight-initialization algorithm. Jeff Heaton, the author of Encog, claims that such initialization usually performs worse than other methods. Here is another result on the performance of weight-initialization algorithms. From my own experience, I suggest initializing your weights with values of mixed signs. Even in cases where all of my outputs were positive, mixed-sign weights performed better than same-sign weights.
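As a sketch of what I mean, a symmetric uniform initializer like the one below guarantees that both signs appear in every layer. The range r = 0.5 is an arbitrary choice here; scaling it by 1/sqrt(fan_in) is a common refinement:

import numpy as np

def init_mixed_sign(layer_sizes, r=0.5, seed=None):
    # Draw weights and biases uniformly from [-r, r], so roughly half of
    # them start positive and half negative. layer_sizes is e.g. [784, 30, 10].
    rng = np.random.default_rng(seed)
    wts = [rng.uniform(-r, r, size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
    bias = [rng.uniform(-r, r, size=(m, 1)) for m in layer_sizes[1:]]
    return wts, bias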