<p>The right way to do this is to implement the penalty as a custom <code>torch.autograd.Function</code>: the forward pass is the identity, and the L1 subgradient is injected during the backward pass.</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn

class L1Penalty(torch.autograd.Function):

    @staticmethod
    def forward(ctx, input, l1weight=0.1):
        ctx.save_for_backward(input)
        ctx.l1weight = l1weight
        return input  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        # Subgradient of l1weight * |input| is l1weight * sign(input)
        grad_input = input.sign().mul(ctx.l1weight)
        grad_input += grad_output
        # One gradient per forward input; l1weight is not differentiable
        return grad_input, None
</code></pre>
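<p>You can see the effect directly: the output equals the input, but the incoming gradient gets <code>l1weight * sign(input)</code> added to it. A minimal sanity check (the tensor values here are just for illustration):</p>
<pre class="lang-py prettyprint-override"><code>x = torch.tensor([-2.0, 0.5, 3.0], requires_grad=True)
y = L1Penalty.apply(x, 0.1)
y.sum().backward()  # upstream gradient is all ones
print(x.grad)       # tensor([0.9000, 1.1000, 1.1000])
</code></pre>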
<p>Create a <code>Lambda</code> class that acts as a wrapper:</p>
<pre class="lang-py prettyprint-override"><code>class Lambda(nn.Module):
    """
    Input: a function
    Returns: a Module that can be used
        inside nn.Sequential
    """
    def __init__(self, func):
        super().__init__()
        self.func = func

    def forward(self, x):
        return self.func(x)
</code></pre>
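<p>The wrapper is general: any callable that takes and returns a tensor can now sit inside <code>nn.Sequential</code>. For example (purely illustrative, not part of the model below):</p>
<pre class="lang-py prettyprint-override"><code>flatten = Lambda(lambda t: t.view(t.size(0), -1))
print(flatten(torch.rand(4, 2, 3)).shape)  # torch.Size([4, 6])
</code></pre>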
<p>Ta-da:</p>
<pre class="lang-py prettyprint-override"><code>model = nn.Sequential(
    nn.Linear(10, 10),
    nn.ReLU(),
    nn.Linear(10, 6),
    nn.ReLU(),
    # sparsity penalty on the bottleneck activations
    Lambda(L1Penalty.apply),
    nn.Linear(6, 10),
    nn.ReLU(),
    nn.Linear(10, 10),
    nn.ReLU())

a = torch.rand(50, 10)
b = model(a)
print(b.shape)  # torch.Size([50, 10])
</code></pre>
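<p>Note that <code>L1Penalty</code> leaves the forward activations, and therefore the reported loss, untouched; the sparsity pressure only appears in the gradients during <code>backward()</code>. A minimal training-step sketch, with the reconstruction loss and SGD optimizer chosen here purely for illustration:</p>
<pre class="lang-py prettyprint-override"><code>optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.MSELoss()

optimizer.zero_grad()
out = model(a)
loss = criterion(out, a)  # autoencoder-style reconstruction target
loss.backward()           # L1 subgradient is injected here
optimizer.step()
</code></pre>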