I want to modify the tensor that stores the gradient of CrossEntropyLoss(), i.e. p(I) - T(I). Where is it stored, and how can I access it?
Code:
import torch
import torch.nn as nn

input = torch.randn(3, 5, requires_grad=True)
# This hook fires during backward() with the gradient w.r.t. input
input.register_hook(lambda x: print(" \n input hook: ", x))
print(input)
target = torch.empty(3, dtype=torch.long).random_(5)
print(target)
criterion = nn.CrossEntropyLoss()
# Note: this line is a no-op; requires_grad is a tensor attribute, not a module flag
criterion.requires_grad = True
loss0 = criterion(input, target)
loss0.register_hook(lambda x: print(" \n loss0 hook: ", x))
print("before backward loss0.grad :", loss0.grad)
print("loss0 :", loss0)
loss0.backward()
print("after backward loss0.grad :", loss0.grad)
Output:
tensor([[-0.6149, -0.8179, 0.6084, -0.2837, -0.5316],
[ 1.7246, 0.5348, 1.3646, -0.7148, -0.3421],
[-0.3478, -0.6732, -0.7610, -1.0381, -0.5570]], requires_grad=True)
tensor([4, 1, 0])
before backward loss0.grad : None
loss0 : tensor(1.7500, grad_fn=<NllLossBackward>)
loss0 hook: tensor(1.)
input hook: tensor([[ 0.0433, 0.0354, 0.1472, 0.0603, -0.2862],
[ 0.1504, -0.2876, 0.1050, 0.0131, 0.0190],
[-0.2432, 0.0651, 0.0597, 0.0452, 0.0732]])
after backward loss0.grad : None
As noted in the comments, the gradient you want is the one with respect to input (the output of the model), not the gradient of the loss itself, and your output already shows it. Here loss0.grad gives None even after backward(), because loss0 is a non-leaf tensor and PyTorch only populates .grad on leaf tensors. input, however, is a leaf tensor, so input.grad returns p(I) - T(I), scaled by 1/batch_size due to CrossEntropyLoss's default mean reduction: this is the gradient you are interested in, and it is the same tensor your input hook printed.
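A minimal sketch of both reading and modifying that gradient, reusing the setup from the question (the factor of 2 is an arbitrary illustration, not anything from the original code):

import torch
import torch.nn as nn

input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)

# A hook that returns a tensor replaces the gradient flowing backward;
# a hook that returns None (e.g. one that only prints) leaves it unchanged.
input.register_hook(lambda g: g * 2)

loss = nn.CrossEntropyLoss()(input, target)
loss.backward()

# input is a leaf tensor, so the (here doubled) gradient p(I) - T(I)
# is accumulated into input.grad by backward().
print(input.grad)

Because the hook runs before the gradient is accumulated into .grad, input.grad ends up holding the modified tensor, so returning a new tensor from register_hook is one way to edit the gradient without digging for where autograd stores it internally.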