I am trying to create a simple neural network with two inputs and two outputs, based on a mathematical function I created.
My problem is that I cannot compute the loss.
X_s is a (600, 2) array, and F is an array of size (600, 2) generated by the function (code not shown). The model is as follows:
class ReactorNet(nn.Module):
    def __init__(self, input_size, hidden1_size, hidden2_size, num_classes):
        super(ReactorNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden1_size)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(hidden1_size, hidden2_size)
        self.relu2 = nn.ReLU()
        self.fc3 = nn.Linear(hidden2_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu1(out)
        out = self.fc2(out)
        out = self.relu2(out)
        out = self.fc3(out)
        return out
model = ReactorNet(2, 10, 4, 2)
print(model)
X = torch.rand((1,2))
X = X.view(-1, 2)
output = model(X)
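As a sanity check (this is a minimal standalone sketch, not part of the original post), the network as defined does map a batch of shape (N, 2) to an output of shape (N, 2), so the architecture itself matches the (600, 2) data:

```python
import torch
import torch.nn as nn

# Same architecture as above: 2 -> 10 -> 4 -> 2 with ReLU activations.
class ReactorNet(nn.Module):
    def __init__(self, input_size, hidden1_size, hidden2_size, num_classes):
        super(ReactorNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden1_size)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(hidden1_size, hidden2_size)
        self.relu2 = nn.ReLU()
        self.fc3 = nn.Linear(hidden2_size, num_classes)

    def forward(self, x):
        out = self.relu1(self.fc1(x))
        out = self.relu2(self.fc2(out))
        return self.fc3(out)

model = ReactorNet(2, 10, 4, 2)
batch = torch.rand(600, 2)      # same shape as X_s
print(model(batch).shape)       # torch.Size([600, 2])
```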
from torch.utils.data import TensorDataset, DataLoader
import torch.optim as optim
import torch.nn.functional as func

tensor_x = torch.Tensor(X_s)  # transform to torch tensor
tensor_y = torch.Tensor(F)
my_dataset = TensorDataset(tensor_x, tensor_y)  # create your dataset
print(my_dataset)
train_set, val_set = torch.utils.data.random_split(my_dataset, [550, 50])
learn_rate = optim.Adam(model.parameters(), lr=0.001)
epochs = 1

for i in range(epochs):
    for data in train_set:
        input, output = data
        print(input)
        print(output)
        model.zero_grad()
        result = model(input.view(-1, 2).data)
        result = result.data
        result = result.squeeze(0)
        print(result)
        loss = func.nll_loss(output, result)
        loss.backward()
        learn_rate.step()
        print(loss)

# Test the network
model.eval()
The resulting error occurs on the NLL loss line:
Expected 2 or more dimensions (got 1)
Printing these tensors shows that they appear to have the correct dimensions:
tensor([0.7197, 0.0468])
tensor([0.3284, 0.2165])
tensor([-0.0252, -0.2400])
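Those 1-D shapes are in fact the source of the error: iterating over `train_set` directly yields single unbatched samples, and `func.nll_loss` expects its first argument to be a 2-D (N, C) tensor of log-probabilities and its second a 1-D tensor of integer class indices, so it rejects a 1-D first argument. The arguments are also passed in (target, prediction) order, `.data` detaches the result so `backward()` has no graph to traverse, and NLL is a classification loss while fitting a real-valued (600, 2) function is regression. Below is a minimal sketch of a loop that runs, assuming the goal is regression with `MSELoss`; the random `X_s` and `F` here are stand-ins for the real arrays:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader, random_split

torch.manual_seed(0)

# Stand-ins for the real X_s and F arrays (assumed shapes from the post).
X_s = torch.rand(600, 2)
F = torch.rand(600, 2)

# Same 2 -> 10 -> 4 -> 2 architecture, written compactly.
model = nn.Sequential(
    nn.Linear(2, 10), nn.ReLU(),
    nn.Linear(10, 4), nn.ReLU(),
    nn.Linear(4, 2),
)

dataset = TensorDataset(X_s, F)
train_set, val_set = random_split(dataset, [550, 50])
loader = DataLoader(train_set, batch_size=32, shuffle=True)  # yields 2-D batches

criterion = nn.MSELoss()                      # regression loss, not NLL
optimizer = optim.Adam(model.parameters(), lr=0.001)

for epoch in range(1):
    for inputs, targets in loader:
        optimizer.zero_grad()
        preds = model(inputs)                 # no .data: keep the autograd graph
        loss = criterion(preds, targets)      # (prediction, target) order
        loss.backward()
        optimizer.step()

print(loss.item())
```

Wrapping the split dataset in a `DataLoader` is what restores the batch dimension the loss function was complaining about.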