<p>The reason you get the <code>Graph disconnected</code> error is that you did not specify an <code>Input</code> layer. But that is not the main issue here. Removing intermediate layers from a <code>keras</code> model, whether built with the <code>Sequential</code> or the Functional API, is sometimes not straightforward.</p>
<p>For a Sequential model it should be relatively easy, whereas in a functional model you need to take care of multi-input blocks (e.g. <code>multiply</code>, <code>add</code>, etc.). For example: if you want to remove some intermediate layers of a Sequential model, you can easily adapt <a href="https://stackoverflow.com/a/49492256/9215780">this solution</a>. But you cannot do the same with a functional model (such as <code>efficientnet</code>) because of its <strong>multi-input internal blocks</strong>; you would run into the following error: <code>ValueError: A merged layer should be called on a list of inputs</code>. So it needs more work, and here is a <a href="https://stackoverflow.com/a/54517478/9215780">possible approach</a> to overcome it.</p>
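<p>For the Sequential case, the idea behind the linked solution can be sketched as below: rebuild a new <code>Sequential</code> model that simply skips the unwanted layer. The layer names (<code>keep_1</code>, <code>drop_me</code>, <code>keep_2</code>) are illustrative placeholders, not real EfficientNet layers:</p>
<pre><code>import tensorflow as tf

# A toy Sequential model standing in for your network.
seq_model = tf.keras.Sequential(name='demo')
seq_model.add(tf.keras.layers.Dense(16, name='keep_1'))
seq_model.add(tf.keras.layers.Dense(16, name='drop_me'))  # layer we want to remove
seq_model.add(tf.keras.layers.Dense(4, name='keep_2'))

# Rebuild a new Sequential model, filtering out the unwanted layer.
# The kept layers (and their weights, once built) are shared, not copied.
trimmed = tf.keras.Sequential(
    [l for l in seq_model.layers if l.name != 'drop_me'],
    name='trimmed',
)
print([l.name for l in trimmed.layers])  # ['keep_1', 'keep_2']
</code></pre>
<p>This only works when the dropped layer's input and output shapes are compatible, so the layers on either side of it still fit together.</p>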
<hr/>
<p>Here I will show a simple workaround for your case, but it may not be general and can be unsafe in some situations. It builds on <a href="https://stackoverflow.com/q/41668813/9215780">this approach</a> and uses the <code>pop</code> method; see <a href="https://github.com/tensorflow/tensorflow/issues/22479#issuecomment-472437833" rel="nofollow noreferrer">Why it can be unsafe to use!</a>. OK, let's load the model first.</p>
<pre><code>func_model = tf.keras.applications.EfficientNetB0()
for i, l in enumerate(func_model.layers):
print(l.name, l.output_shape)
if i == 8: break
input_19 [(None, 224, 224, 3)]
rescaling_13 (None, 224, 224, 3)
normalization_13 (None, 224, 224, 3)
stem_conv_pad (None, 225, 225, 3)
stem_conv (None, 112, 112, 32)
stem_bn (None, 112, 112, 32)
stem_activation (None, 112, 112, 32)
block1a_dwconv (None, 112, 112, 32)
block1a_bn (None, 112, 112, 32)
</code></pre>
<p>Next, use the <code>.pop</code> method:</p>
<pre><code>func_model._layers.pop(1) # remove rescaling
func_model._layers.pop(1) # remove normalization
for i, l in enumerate(func_model.layers):
print(l.name, l.output_shape)
if i == 8: break
input_22 [(None, 224, 224, 3)]
stem_conv_pad (None, 225, 225, 3)
stem_conv (None, 112, 112, 32)
stem_bn (None, 112, 112, 32)
stem_activation (None, 112, 112, 32)
block1a_dwconv (None, 112, 112, 32)
block1a_bn (None, 112, 112, 32)
block1a_activation (None, 112, 112, 32)
block1a_se_squeeze (None, 32)
</code></pre>
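<p>Note that <code>_layers.pop</code> only edits Keras' internal bookkeeping list; the underlying call graph can still contain the removed ops, which is why the linked issue calls it unsafe. A possibly safer sketch (it may not work in every Keras version) is to build a new functional model whose input is the tensor that feeds <code>stem_conv_pad</code>, so <code>rescaling</code> and <code>normalization</code> never enter the new graph at all. <code>weights=None</code> below is just an assumption to skip the pretrained download; drop it if you need the ImageNet weights:</p>
<pre><code>import tensorflow as tf

# weights=None skips the pretrained-weight download (assumption for the demo).
func_model = tf.keras.applications.EfficientNetB0(weights=None)

# New functional model starting from the tensor that feeds stem_conv_pad,
# so the rescaling/normalization layers are left out of the new graph.
new_model = tf.keras.Model(
    inputs=func_model.get_layer('stem_conv_pad').input,
    outputs=func_model.output,
)

names = [l.name for l in new_model.layers]
print(any(n.startswith('rescaling') for n in names))  # the layer is gone
</code></pre>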