I have spent a few days trying to build a basic HTML template generator while learning about RNNs. I tried different approaches and even overfit on the following data:
<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>This is a Heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
using the Adam optimizer and cross-entropy loss, reaching 100% training and validation accuracy.
The problem is that when I try to sample from the network, the output is completely random, and I cannot figure out what is wrong:
..<a<a<a<a<aa<ttp11111b11111b11111111b11b1bbbb<btttn111
My sampling function is as follows:
def sample_sentence():
    words = list()
    count = 0
    modelOne.eval()
    with torch.no_grad():
        # Setup initial input state, and input word (we use "the").
        previousWord = torch.LongTensor(1, 1).fill_(trainData.vocabulary['letter2id']['[START]'])
        hidden = Variable(torch.zeros(6, 1, 100).to(device))
        while True:
            # Predict the next word based on the previous hidden state and previous word.
            inputWord = torch.autograd.Variable(previousWord.to(device))
            predictions, newHidden = modelOne(inputWord, hidden)
            hidden = newHidden
            pred = torch.nn.functional.softmax(predictions.squeeze()).data.cpu().numpy().astype('float64')
            pred = pred / np.sum(pred)
            nextWordId = np.random.multinomial(1, pred, 1).argmax()
            if nextWordId == 0:
                continue
            words.append(trainData.vocabulary['id2letter'][nextWordId])
            # Setup the inputs for the next round.
            previousWord.fill_(nextWordId)
            # Keep adding words until the [END] token is generated.
            if nextWordId == trainData.vocabulary['letter2id']['[END]']:
                break
            if count > 20000:
                break
            count += 1
    words.insert(0, '[START]')
    return words
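For reference, the renormalize-then-sample step inside the loop can be exercised in isolation. This is a minimal sketch with made-up probabilities; the float64 cast and renormalization mirror what the function above does, because np.random.multinomial rejects probability vectors that sum to slightly more than 1 after a float32-to-float64 cast:

```python
import numpy as np

# Hypothetical softmax output over a 5-symbol vocabulary (as float32 from a GPU).
pred = np.array([0.1, 0.2, 0.05, 0.5, 0.15], dtype='float32').astype('float64')

# Renormalize so the vector sums to exactly 1.0 in float64.
pred = pred / np.sum(pred)

# Draw a single sample; argmax of the one-hot draw gives the chosen index.
next_id = np.random.multinomial(1, pred, 1).argmax()
assert 0 <= next_id < len(pred)
```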
My network architecture is as follows:
class ModelOne(Model):
    def __init__(self,
                 vocabulary_size,
                 hidden_size,
                 num_layers,
                 rnn_dropout,
                 embedding_size,
                 dropout,
                 num_directions):
        super(Model, self).__init__()
        self.vocabulary_size = vocabulary_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.rnn_dropout = rnn_dropout
        self.dropout = dropout
        self.num_directions = num_directions
        self.embedding_size = embedding_size
        self.embeddings = nn.Embedding(self.vocabulary_size, self.embedding_size)
        self.rnn = nn.GRU(self.embedding_size,
                          self.hidden_size,
                          num_layers=self.num_layers,
                          bidirectional=True if self.num_directions == 2 else False,
                          dropout=self.rnn_dropout,
                          batch_first=True)
        self.linear = nn.Linear(self.hidden_size * self.num_directions, self.vocabulary_size)

    def forward(self, paddedSeqs, hidden):
        batchSequenceLength = paddedSeqs.size(1)
        batchSize = paddedSeqs.size(0)
        lengths = paddedSeqs.ne(0).sum(dim=1)
        embeddingVectors = self.embeddings(paddedSeqs)
        x = torch.nn.utils.rnn.pack_padded_sequence(embeddingVectors, lengths, batch_first=True)
        self.rnn.flatten_parameters()
        x, hid = self.rnn(x, hidden)
        output, _ = torch.nn.utils.rnn.pad_packed_sequence(x, batch_first=True, padding_value=0, total_length=batchSequenceLength)
        predictions = self.linear(output)
        return predictions.view(batchSize, self.vocabulary_size, batchSequenceLength), hid

    def init_hidden(self, paddedSeqs):
        hidden = Variable(torch.zeros(self.num_layers * self.num_directions,
                                      1,
                                      self.hidden_size).to(device))
        return hidden
modelOne = ModelOne(vocabulary_size=vocabularySize,
                    hidden_size=100,
                    embedding_size=50,
                    num_layers=3,
                    rnn_dropout=0.0,
                    dropout=0,
                    num_directions=2).to(device)
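For what it is worth, the hard-coded hidden state torch.zeros(6, 1, 100) in the sampler only lines up with this instantiation because num_layers * num_directions = 3 * 2 = 6 and hidden_size = 100. A minimal sketch of that shape arithmetic, using the hyperparameters above:

```python
import torch

# PyTorch GRUs expect the initial hidden state to have shape
# (num_layers * num_directions, batch, hidden_size).
num_layers, num_directions, hidden_size = 3, 2, 100
batch = 1

hidden = torch.zeros(num_layers * num_directions, batch, hidden_size)
print(hidden.shape)  # torch.Size([6, 1, 100])
```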
If you have any ideas about what needs to change, please let me know. I have put all the code in a GitHub repository: https://github.com/OverclockRo/HTMLGeneration/blob/SamplingTestTemplate/Untitled.ipynb
First, for a GRU (RNN) to work well, you will probably need more data to train on.
Second, there seems to be a problem on the embedding side. It looks like the mapping vocabulary['id2letter'] is not working; otherwise you would get sequences of tags such as <head><title><title><title> rather than p111.

EDIT
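One quick way to rule the mapping in or out is a round-trip check over the whole vocabulary. A minimal sketch: the letter2id/id2letter structure mirrors the question's trainData.vocabulary, but the contents here are a stand-in, not the actual vocabulary.

```python
# Stand-in vocabulary with the same structure as trainData.vocabulary.
letter2id = {'[START]': 1, '[END]': 2, '<': 3, 'h': 4, 'p': 5, '1': 6}
id2letter = {i: ch for ch, i in letter2id.items()}

# Every character should survive a letter -> id -> letter round trip.
for ch, i in letter2id.items():
    assert id2letter[i] == ch, f"round-trip failed for {ch!r}"
print("vocabulary round-trip OK")
```

If an assertion fires here on the real vocabulary, the sampled ids are being decoded into the wrong characters, which would produce exactly the kind of garbled output shown above.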
I trained this character-level GRU network on the HTML source of this page for 1700 epochs. Below is a 2000-character excerpt of what it generated:
I hope this helps.