<p>I think you can remove the last layer and then add the layers you need.</p>
<p>So in your case:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
import torch.nn.functional as F
import googlenet_pytorch

class GoogleNet(nn.Module):
    def __init__(self):
        super(GoogleNet, self).__init__()
        # load the original GoogLeNet
        self.model = googlenet_pytorch.GoogLeNet.from_pretrained('googlenet')
        # remove the last two layers (dropout and fc)
        self.model = nn.Sequential(*list(self.model.children())[:-2])
        # add your dropout layer
        self.dropout = nn.Dropout(0.2, inplace=False)
        # add your dense layer
        self.fc = nn.Linear(1024, 200, bias=False)

    def forward(self, x):
        # the nn.Sequential wrapper no longer has extract_features(),
        # so call the model directly; it now ends at the average pool
        x = self.model(x)
        # flatten (batch, 1024, 1, 1) to (batch, 1024) for the fc layer
        x = torch.flatten(x, 1)
        # insert your layer normalization; F.layer_norm has no
        # elementwise_affine argument, but leaving weight/bias unset
        # already gives you a non-affine normalization
        x = F.layer_norm(x, x.size()[1:])
        # put the dropout layer back on
        x = self.dropout(x)
        x = self.fc(x)
        x = F.normalize(x, p=2, dim=1)
        return x
</code></pre>
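<p>As a quick sanity check (this assumes the usual 3×224×224 input that GoogLeNet expects, and that <code>from_pretrained</code> returns the model without the auxiliary classifiers, as pretrained weights normally do):</p>
<pre class="lang-py prettyprint-override"><code>model = GoogleNet()
model.eval()
with torch.no_grad():
    out = model(torch.randn(2, 3, 224, 224))
print(out.shape)  # expected: torch.Size([2, 200])
</code></pre>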
<p>But note that when you print a summary of the model, it will not show the normalization layer. It does not print anything you apply with <code>F.</code> inside <code>forward()</code>, only the modules you create in <code>__init__()</code>.</p>
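<p>If you do want the normalization to show up in the printout, register it as a module in <code>__init__()</code> instead of calling the functional version. A minimal sketch (the class name is just for illustration; <code>nn.LayerNorm(1024)</code> matches the flattened 1024-dim features above):</p>
<pre class="lang-py prettyprint-override"><code>import torch.nn as nn

class WithRegisteredNorm(nn.Module):
    def __init__(self):
        super().__init__()
        # created in __init__, so print(model) will list it
        self.layer_norm = nn.LayerNorm(1024, elementwise_affine=False)

    def forward(self, x):
        return self.layer_norm(x)

print(WithRegisteredNorm())
# WithRegisteredNorm(
#   (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=False)
# )
</code></pre>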