<p>Compute the L2 loss only once:</p>
<pre><code>tv = tf.trainable_variables()  # all trainable variables in the graph
# sum of 0.5 * ||v||^2 over every trainable variable
regularization_cost = tf.reduce_sum([tf.nn.l2_loss(v) for v in tv])
# squared-error data term plus the L2 penalty
cost = tf.reduce_sum(tf.pow(pred - y, 2)) + regularization_cost
optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)
</code></pre>
<p>You may also want to exclude the <code>bias</code> variables, since those should not be regularized.</p>
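<p>One way to do that is to filter <code>tf.trainable_variables()</code> by variable name before summing the L2 terms. A minimal sketch of the filter, using a small <code>Var</code> stub in place of real <code>tf.Variable</code> objects (only the <code>.name</code> attribute matters, and the <code>"bias"</code> naming convention is an assumption that depends on how your layers name their variables):</p>

```python
class Var:
    """Stand-in for a tf.Variable; only .name is used by the filter."""
    def __init__(self, name):
        self.name = name

def regularizable(variables):
    # Keep only variables whose name does not contain "bias";
    # the same filter can be applied to tf.trainable_variables().
    return [v for v in variables if "bias" not in v.name.lower()]

tv = [Var("dense/kernel:0"), Var("dense/bias:0"), Var("out/kernel:0")]
print([v.name for v in regularizable(tv)])  # ['dense/kernel:0', 'out/kernel:0']
```

<p>The filtered list would then replace <code>tv</code> in the list comprehension that builds <code>regularization_cost</code>.</p>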