TensorFlow GoogLeNet Inception: poor results

Published 2024-10-04 03:26:32


I am trying to implement a version of the GoogLeNet Inception neural network, but I am getting 10% accuracy with the MNIST data set. This is worrying, because even a simple neural network should reach 97%+ accuracy on this dataset, so I am sure I have not implemented the Inception network correctly. My code is included below.

The inception neural network that I am following

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf

x  = tf.placeholder(dtype = tf.float32, shape = [None,784])
y_ = tf.placeholder(dtype = tf.float32, shape = [None,10])

x_input = tf.reshape(x,[-1,28,28,1])


# 1x1 Convolution
W1x1 = tf.Variable(tf.random_normal([1,1,1,1]))
b1x1 = tf.Variable(tf.random_normal([1]))
output1x1 = tf.add(tf.nn.conv2d(x_input,W1x1, strides = [1,1,1,1], padding = 'SAME'),b1x1)
output1x1 = tf.nn.relu(output1x1)


# 5x5 Convolution
W5x5 = tf.Variable(tf.random_normal([1,1,1,1]))
b5x5 = tf.Variable(tf.random_normal([1]))
output5x5 = tf.add(tf.nn.conv2d(output1x1,W5x5, strides = [1,1,1,1], padding = 'SAME'),b5x5)
output5x5 = tf.nn.relu(output5x5)


# 3x3 Convolution
W3x3 = tf.Variable(tf.random_normal([1,1,1,1]))
b3x3 = tf.Variable(tf.random_normal([1]))
output3x3 = tf.add(tf.nn.conv2d(output1x1,W3x3, strides = [1,1,1,1], padding = 'SAME'),b3x3)
output3x3 = tf.nn.relu(output3x3)


# AveragePooling followed by 1x1 convolution
outputPool = tf.nn.avg_pool(output1x1, ksize = [1,2,2,1], strides = [1,1,1,1], padding = "SAME")
Wo1x1 = tf.Variable(tf.random_normal([1,1,1,1]))
bo1x1 = tf.Variable(tf.random_normal([1]))
outputo1x1 = tf.add(tf.nn.conv2d(outputPool,Wo1x1, strides = [1,1,1,1], padding = 'SAME'),bo1x1)
outputo1x1 = tf.nn.relu(outputo1x1)


# Concatenate the 4 convolution outputs
finalouput = tf.concat([output1x1, output5x5, output3x3, outputo1x1], 3)
finalouput = tf.reshape(finalouput, [-1, 7*7*64])

#Add a fully connected layer
W_fc = tf.Variable(tf.random_normal([7*7*64,1024]))
b_fc = tf.Variable(tf.random_normal([1024]))  
output_fc = tf.add(tf.matmul(finalouput,W_fc), b_fc )
output_fc = tf.nn.relu(output_fc)
output_fc = tf.nn.dropout(output_fc, keep_prob = 0.85)

#Final layer
W_final = tf.Variable(tf.random_normal([1024,10]))
b_final = tf.Variable(tf.random_normal([10]))
predictions = tf.add(tf.matmul(output_fc,W_final), b_final)


# Train the model
cost = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels = y_  ,logits = predictions))
optimiser = tf.train.AdamOptimizer(1e-3).minimize(cost)
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(20000):
        batch = mnist.train.next_batch(50)
        if i % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1]})
            print('step %d, training accuracy %g' % (i, train_accuracy))
        optimiser.run(feed_dict={x: batch[0], y_: batch[1]})
    print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels,}))

Tags: add, output, data, tf, batch, random, nn, variable
3 Answers

The problem is the weight initialization. Weights initialized with tf.random_normal() have a standard deviation of 1, which is very high; reducing it should fix the problem.

Change the weight initialization to:

W** = tf.Variable(tf.random_normal(..., stddev=0.01))
b** = tf.Variable(tf.random_normal(..., stddev=0.001))
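For example, applied to the first branch of the code above (a sketch: the 0.01 and 0.001 values are illustrative starting points, not tuned constants):

# A small stddev keeps the pre-activation values near zero, where the
# ReLU and softmax gradients are well behaved
W1x1 = tf.Variable(tf.random_normal([1,1,1,1], stddev=0.01))
b1x1 = tf.Variable(tf.random_normal([1], stddev=0.001))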

Your model is very shallow. GoogLeNet has 22 layers.

I don't recommend implementing the layers yourself, because it is easy to get wrong. It is better to use TensorFlow's layer abstractions. You may also want to look at, or reuse, an existing implementation, for example here.
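As a rough sketch, one Inception-style module built with the tf.layers API might look like the following (the helper name and filter counts are made up for illustration; the real network stacks many such modules with carefully chosen filter counts per branch):

def inception_module(inputs, filters=16):
    # 1x1 convolution branch
    branch1 = tf.layers.conv2d(inputs, filters, 1, padding='same', activation=tf.nn.relu)
    # 1x1 reduction followed by a 3x3 convolution
    branch2 = tf.layers.conv2d(inputs, filters, 1, padding='same', activation=tf.nn.relu)
    branch2 = tf.layers.conv2d(branch2, filters, 3, padding='same', activation=tf.nn.relu)
    # 1x1 reduction followed by a 5x5 convolution
    branch3 = tf.layers.conv2d(inputs, filters, 1, padding='same', activation=tf.nn.relu)
    branch3 = tf.layers.conv2d(branch3, filters, 5, padding='same', activation=tf.nn.relu)
    # 3x3 pooling followed by a 1x1 projection
    branch4 = tf.layers.max_pooling2d(inputs, 3, 1, padding='same')
    branch4 = tf.layers.conv2d(branch4, filters, 1, padding='same', activation=tf.nn.relu)
    # Concatenate all branches along the channel axis
    return tf.concat([branch1, branch2, branch3, branch4], axis=3)

Stacking several of these modules, with pooling in between, before the fully connected head would bring the structure much closer to the 22-layer network in the paper. As a side benefit, tf.layers.conv2d uses a sensibly scaled default kernel initializer, which also addresses the initialization issue from the first answer.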

Maybe try concatenating them in a different order?
