<p>You can do the following with the output of your call to <code>dynamic_rnn</code> in order to compute the two softmaxes and the corresponding losses:</p>
<pre><code>with tf.variable_scope("softmax_0"):
    # Project your RNN output to the first output size = 10
    W = tf.get_variable("kernel_0", [output[0].get_shape()[1], 10])
    logits_0 = tf.matmul(output[0], W)
    # Apply the softmax function to the logits (of size 10)
    output_0 = tf.nn.softmax(logits_0, name="softmax_0")
    # Compute the loss (as you did in your question) with
    # softmax_cross_entropy_with_logits applied directly on the logits
    loss_0 = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits_0, labels=labels[0]))

with tf.variable_scope("softmax_1"):
    # Project your RNN output to the second output size = 20
    W = tf.get_variable("kernel_1", [output[0].get_shape()[1], 20])
    logits_1 = tf.matmul(output[0], W)
    # Apply the softmax function to the logits (of size 20)
    output_1 = tf.nn.softmax(logits_1, name="softmax_1")
    # Compute the loss (as you did in your question) with
    # softmax_cross_entropy_with_logits applied directly on the logits
    loss_1 = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits_1, labels=labels[1]))
</code></pre>
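<p>To make the shapes concrete, here is a minimal NumPy sketch of the same two-head computation (all names and sizes here are stand-ins for illustration; <code>rnn_out</code> plays the role of the RNN output, and the cross-entropy mirrors <code>softmax_cross_entropy_with_logits</code> followed by <code>reduce_mean</code>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
batch, hidden = 4, 32
rnn_out = rng.standard_normal((batch, hidden))  # stand-in for the RNN output

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def xent(logits, labels):
    # Mean cross-entropy against one-hot labels
    return -np.mean(np.sum(labels * np.log(softmax(logits)), axis=-1))

# Two independent heads of sizes 10 and 20 on the same RNN output
W0 = rng.standard_normal((hidden, 10))
W1 = rng.standard_normal((hidden, 20))
logits_0, logits_1 = rnn_out @ W0, rnn_out @ W1
output_0, output_1 = softmax(logits_0), softmax(logits_1)

labels_0 = np.eye(10)[rng.integers(0, 10, batch)]
labels_1 = np.eye(20)[rng.integers(0, 20, batch)]
loss_0, loss_1 = xent(logits_0, labels_0), xent(logits_1, labels_1)
```

<p>Each head only adds its own projection matrix; the RNN (and its gradients) is shared between the two tasks.</p>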
<p>If it is relevant for your application, you can also combine the two losses, for example by summing them into a single training objective.</p>
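<p>A minimal sketch of combining the two losses (the scalar values and the weighting factor <code>alpha</code> are assumptions for illustration; in TensorFlow the same arithmetic on the two loss tensors builds the combined op):</p>

```python
# Hypothetical scalar values standing in for the two reduce_mean losses
loss_0, loss_1 = 0.8, 1.2

# A plain sum treats both tasks equally...
total_loss = loss_0 + loss_1

# ...while a weighted sum lets you trade one task off against the other
alpha = 0.7  # assumed hyperparameter
weighted_loss = alpha * loss_0 + (1 - alpha) * loss_1
```

<p>You would then pass the combined loss to a single optimizer so both heads are trained jointly.</p>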
<p><strong>EDIT</strong>
To answer your question in the comments about what to do with the two softmax outputs: you can roughly do the following:</p>
<pre><code>with tf.variable_scope("second_part"):
    # Project each softmax output to a common size n, then sum them
    W1 = tf.get_variable("W_1", [output_0.get_shape()[1], n])
    W2 = tf.get_variable("W_2", [output_1.get_shape()[1], n])
    prediction = tf.matmul(output_0, W1) + tf.matmul(output_1, W2)

with tf.variable_scope("optimization_part"):
    loss = tf.reduce_mean(tf.squared_difference(prediction, label))
</code></pre>
<p>You only need to define <code>n</code>, the number of columns of W1 and W2.</p>
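<p>The shape constraint here is that both projections map to the same width <code>n</code>, so the two terms can be added elementwise. A NumPy sketch under assumed sizes (all names are stand-ins):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
batch, n = 4, 5  # n is the assumed common output width

# Stand-ins for the two softmax outputs (sizes 10 and 20)
output_0 = rng.random((batch, 10))
output_1 = rng.random((batch, 20))

# Each projection maps its head to the same width n
W1 = rng.standard_normal((10, n))
W2 = rng.standard_normal((20, n))
prediction = output_0 @ W1 + output_1 @ W2

# Mean squared difference against a target, mirroring
# tf.reduce_mean(tf.squared_difference(prediction, label))
label = rng.standard_normal((batch, n))
loss = np.mean((prediction - label) ** 2)
```

<p>Summing the two projections is one simple choice; concatenating <code>output_0</code> and <code>output_1</code> and using a single weight matrix of shape <code>[30, n]</code> is mathematically equivalent.</p>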