<p>I know there is a lot out there about computing the Hessian in PyTorch, but as far as I can tell, nothing I have found fits my case. To be precise, the Hessian I want is the Jacobian of the gradient of the loss with respect to the network parameters, i.e. the matrix of second derivatives with respect to the parameters.</p>
<p>I found some code that works in an intuitive, if not particularly fast, way: it simply computes the gradient of the loss gradient with respect to the parameters, one element of the gradient at a time. I believe the logic is correct, but I am getting an error related to <code>requires_grad</code>. I am a PyTorch beginner, so this may be a simple thing, but the error seems to say that it cannot take the gradient of the <code>env_grads</code> variable, which is the output of the previous <code>grad</code> call.</p>
<p>Any help with this would be greatly appreciated. Here is the code, followed by the error message. I also print out the <code>env_grads[0]</code> variable so we can see that it is in fact a tensor, which is the correct output of the previous <code>grad</code> call.</p>
<pre><code>env_loss = loss_fn(env_outputs, env_targets)
total_loss += env_loss
env_grads = torch.autograd.grad(env_loss, params, retain_graph=True)
print(env_grads[0])
hess_params = torch.zeros_like(env_grads[0])
for i in range(env_grads[0].size(0)):
    for j in range(env_grads[0].size(1)):
        hess_params[i, j] = torch.autograd.grad(env_grads[0][i][j], params, retain_graph=True)[0][i, j]  # <--- error here
print(hess_params)
exit()
</code></pre>
<p>Output:</p>
<pre><code>tensor([[-6.4064e-03, -3.1738e-03, 1.7128e-02, 8.0391e-03],
[ 7.1698e-03, -2.4640e-03, -2.2769e-03, -1.0687e-03],
[-3.0390e-04, -2.4273e-03, -4.0799e-02, -1.9149e-02],
...,
[ 1.1258e-02, -2.5911e-05, -9.8133e-02, -4.6059e-02],
[ 8.1502e-04, -2.5814e-03, 4.1772e-02, 1.9606e-02],
[-1.0075e-02, 6.6072e-03, 8.3118e-04, 3.9011e-04]], device='cuda:0')
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "/home/jefferythewind/anaconda3/envs/rapids3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/jefferythewind/anaconda3/envs/rapids3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/run_synthetic.py", line 258, in <module>
main(args)
File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/run_synthetic.py", line 245, in main
deep_mask=args.deep_mask
File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/run_synthetic.py", line 103, in train
scale_grad_inverse_sparsity=scale_grad_inverse_sparsity
File "/home/jefferythewind/Projects/Irina/learning-explanations-hard-to-vary/and_mask/and_mask_utils.py", line 154, in get_grads_deep
hess_params[i, j] = torch.autograd.grad(env_grads[0][i][j], params, retain_graph=True)[0][i, j]
File "/home/jefferythewind/anaconda3/envs/rapids3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 157, in grad
inputs, allow_unused)
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
</code></pre>
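<p>For reference, the error message points at the first <code>grad</code> call: without <code>create_graph=True</code>, the gradients it returns are detached from the autograd graph and cannot themselves be differentiated. A minimal self-contained sketch of the element-by-element Hessian loop with that flag added (using a toy weight <code>w</code> and random data in place of the original <code>params</code> and <code>loss_fn</code>, which are not shown in the question):</p>

```python
import torch

# Toy stand-ins for the question's `params` and loss: a single 3x2
# weight matrix and a mean-squared-error loss on random data.
torch.manual_seed(0)
w = torch.randn(3, 2, requires_grad=True)
x = torch.randn(5, 2)
y = torch.randn(5, 3)

loss = ((x @ w.t() - y) ** 2).mean()

# create_graph=True is the key difference from the question's code:
# it keeps the returned gradient inside the autograd graph, so the
# gradient tensor can itself be differentiated a second time.
(g,) = torch.autograd.grad(loss, w, create_graph=True)

# Element-by-element diagonal-block Hessian, as in the question's loop.
hess = torch.zeros_like(g)
for i in range(g.size(0)):
    for j in range(g.size(1)):
        # retain_graph=True so the graph survives across loop iterations.
        hess[i, j] = torch.autograd.grad(g[i, j], w, retain_graph=True)[0][i, j]

print(hess)
```

<p>For this quadratic loss every diagonal second derivative is positive, which is an easy sanity check on the result.</p>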