I am trying to build an RNN that can make use of the GPU, but the packed padded sequence gives me a

RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor

How do I get this to compute directly on the GPU? Here is the relevant part of the code:
def Tensor_length(track):
    """Finds the length of the non-zero part of the padded tensor"""
    return int(torch.nonzero(track).shape[0] / track.shape[1])
.
.
.
def forward(self, tracks, leptons):
    self.rnn.flatten_parameters()
    # list of event lengths
    n_tracks = torch.tensor([Tensor_length(tracks[i])
                             for i in range(len(tracks))])
    sorted_n, indices = torch.sort(n_tracks, descending=True)
    sorted_tracks = tracks[indices].to(args.device)
    sorted_leptons = leptons[indices].to(args.device)
    output, hidden = self.rnn(pack_padded_sequence(sorted_tracks,
                                                   lengths=sorted_n.cpu().numpy(),
                                                   batch_first=True))  # this gives the error
    combined_out = torch.cat((sorted_leptons, hidden[-1]), dim=1)
    out = self.fc(combined_out)  # add lepton data to the matrix
    out = self.softmax(out)
    return out, indices  # passing indices for reorganizing truth
I have tried various approaches, from passing sorted_n directly, to casting it to a long tensor, to passing it as a plain list, but I always hit the same error. I have not worked with PyTorch on the GPU before, so any advice would be appreciated.

Thanks!
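For reference, here is a minimal, self-contained sketch of the pattern the error message asks for: the padded batch and the RNN live on the GPU (when available), while the lengths stay behind as a 1D CPU int64 tensor. The tensor shapes and the GRU here are illustrative stand-ins, not the original model:

```python
import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

batch, max_len, feat = 4, 6, 3
# Padded input batch moves to the GPU (if present)...
tracks = torch.randn(batch, max_len, feat).to(device)
# ...but the lengths stay on the CPU as int64, sorted descending
lengths = torch.tensor([6, 5, 3, 2], dtype=torch.int64)

rnn = nn.GRU(feat, 8, batch_first=True).to(device)
packed = pack_padded_sequence(tracks, lengths=lengths, batch_first=True)
output, hidden = rnn(packed)
print(hidden.shape)  # torch.Size([1, 4, 8])
```

Note that only the `lengths` argument needs to be on the CPU; the packed data itself is computed on whatever device the input tensor is on.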