<p>You can easily get the output of any layer by using: <code>model.layers[index].output</code></p>
<p>For all layers, use this:</p>
<pre><code>import numpy as np
from keras import backend as K

inp = model.input                                           # input placeholder
outputs = [layer.output for layer in model.layers]          # all layer outputs
functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs]  # evaluation functions

# Testing
test = np.random.random(input_shape)[np.newaxis, ...]
layer_outs = [func([test, 1.]) for func in functors]
print(layer_outs)
</code></pre>
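<p>A quick numpy-only sketch of the test-input construction above (the <code>(3, 2)</code> shape is just an illustrative stand-in for <code>input_shape</code>): <code>np.newaxis</code> prepends the batch dimension that the model's input placeholder expects.</p>
<pre><code>import numpy as np

input_shape = (3, 2)  # hypothetical shape of one sample
test = np.random.random(input_shape)[np.newaxis, ...]

print(test.shape)  # (1, 3, 2) -- a batch containing one sample
</code></pre>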
<p>Note: to simulate Dropout, use <code>learning_phase</code> as <code>1.</code> in <code>layer_outs</code>; otherwise use <code>0.</code></p>
<p><strong>Edit:</strong> (based on comments)</p>
<p><code>K.function</code> creates theano/tensorflow tensor functions, which are later used to get the output from the symbolic graph given the input.</p>
<p>Now <code>K.learning_phase()</code> is required as an input, because many Keras layers like Dropout/BatchNormalization depend on it to change their behavior between training and test time.</p>
<p>So if you remove the Dropout layers from your code, you can simply use:</p>
<pre><code>import numpy as np
from keras import backend as K

inp = model.input                                           # input placeholder
outputs = [layer.output for layer in model.layers]          # all layer outputs
functors = [K.function([inp], [out]) for out in outputs]    # evaluation functions

# Testing
test = np.random.random(input_shape)[np.newaxis, ...]
layer_outs = [func([test]) for func in functors]
print(layer_outs)
</code></pre>
<p><strong>Edit 2: More optimized</strong></p>
<p>I just realized that the previous answer is not optimized: for each function evaluation, the data is transferred CPU -&gt; GPU memory, and the tensor computations for the lower layers are repeated over and over.</p>
<p>Instead, this is a much better way, as you don't need multiple functions but a single function giving you the list of all outputs:</p>
<pre><code>import numpy as np
from keras import backend as K

inp = model.input                                           # input placeholder
outputs = [layer.output for layer in model.layers]          # all layer outputs
functor = K.function([inp, K.learning_phase()], outputs)    # single evaluation function

# Testing
test = np.random.random(input_shape)[np.newaxis, ...]
layer_outs = functor([test, 1.])
print(layer_outs)
</code></pre>
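<p>The saving is easy to see with a plain-Python toy, not Keras: each "layer" below is a function applied to the previous layer's output, and a call counter shows that one function per layer recomputes the lower layers again and again, while a single fused evaluation runs each layer exactly once.</p>
<pre><code>call_count = 0

def layer(x):
    # Stand-in for one layer's tensor op; counts how often it runs.
    global call_count
    call_count += 1
    return x + 1

num_layers = 4

# One function per layer: getting layer i's output recomputes layers 0..i.
call_count = 0
per_layer_outputs = []
for i in range(num_layers):
    x = 0
    for _ in range(i + 1):
        x = layer(x)
    per_layer_outputs.append(x)
separate_calls = call_count          # 1 + 2 + 3 + 4 = 10 layer evaluations

# One fused function returning all outputs: each layer runs exactly once.
call_count = 0
x = 0
fused_outputs = []
for _ in range(num_layers):
    x = layer(x)
    fused_outputs.append(x)
fused_calls = call_count             # 4 layer evaluations

print(per_layer_outputs == fused_outputs)  # True -- same results
print(separate_calls, fused_calls)         # 10 4
</code></pre>
<p>On a real backend the gap is worse still, since every separate function call also pays the CPU -&gt; GPU transfer for the input.</p>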