Same model produces consistently different accuracy in Keras and TensorFlow

Published 2024-09-19 21:03:58


I tried to implement the same model in Keras and in TensorFlow using Keras layers, with custom data. The two models produce consistently different accuracies across multiple training runs (Keras ~71%, TensorFlow ~65%). I want the TensorFlow version to do as well as Keras, so I can move to iterating in TensorFlow to tweak some lower-level algorithms.

Here is my original Keras code:

import keras  # needed below for keras.layers.concatenate, keras.losses, keras.optimizers
from keras.layers import Dense, Dropout, Input 
from keras.models import Model
from keras import backend as K

input_size = 2000
num_classes = 4
num_industries = 22
num_aux_inputs = 3

main_input = Input(shape=(input_size,),name='text_vectors')
x = Dense(units=64, activation='relu', name = 'dense1')(main_input)
drop1 = Dropout(0.2,name='dropout1')(x)

auxiliary_input = Input(shape=(num_aux_inputs,), name='aux_input')
x = keras.layers.concatenate([drop1,auxiliary_input])
x = Dense(units=64, activation='relu',name='dense2')(x)
drop2 = Dropout(0.1,name='dropout2')(x)

x = Dense(units=32, activation='relu',name='dense3')(drop2)

main_output = Dense(units=num_classes, 
activation='softmax',name='main_output')(x)

model = Model(inputs=[main_input, auxiliary_input], 
outputs=main_output)

model.compile(loss=keras.losses.categorical_crossentropy, metrics=['accuracy'], optimizer=keras.optimizers.Adadelta())

history = model.fit([train_x,train_x_auxiliary], train_y, batch_size=128, epochs=20, verbose=1, validation_data=([val_x,val_x_auxiliary], val_y))
loss, accuracy = model.evaluate([val_x,val_x_auxiliary], val_y, verbose=0)

I moved the Keras layers into TensorFlow following this article:

import tensorflow as tf
from keras import backend as K
import keras
from keras.layers import Dense, Dropout, Input # Dense layers are "fully connected" layers
from keras.metrics import categorical_accuracy as accuracy
from keras.objectives import categorical_crossentropy


tf.reset_default_graph()

sess = tf.Session()
K.set_session(sess)

input_size = 2000
num_classes = 4
num_industries = 22
num_aux_inputs = 3

x = tf.placeholder(tf.float32, shape=[None, input_size], name='X')
x_aux = tf.placeholder(tf.float32, shape=[None, num_aux_inputs], name='X_aux')
y = tf.placeholder(tf.float32, shape=[None, num_classes], name='Y')

# build graph
layer = Dense(units=64, activation='relu', name = 'dense1')(x)
drop1 = Dropout(0.2,name='dropout1')(layer)
layer = keras.layers.concatenate([drop1,x_aux])
layer = Dense(units=64, activation='relu',name='dense2')(layer)
drop2 = Dropout(0.1,name='dropout2')(layer)
layer = Dense(units=32, activation='relu',name='dense3')(drop2)
output_logits = Dense(units=num_classes, activation='softmax',name='main_output')(layer)

loss = tf.reduce_mean(categorical_crossentropy(y, output_logits))
acc_value = tf.reduce_mean(accuracy(y, output_logits))

correct_prediction = tf.equal(tf.argmax(output_logits, 1), tf.argmax(y, 1), name='correct_pred')

optimizer = tf.train.AdadeltaOptimizer(learning_rate=1.0, rho=0.95,epsilon=tf.keras.backend.epsilon()).minimize(loss)

init = tf.global_variables_initializer()

sess.run(init)

epochs = 20             # Total number of training epochs
batch_size = 128        # Training batch size
display_freq = 300      # Frequency of displaying the training results
num_tr_iter = int(len(y_train) / batch_size)

with sess.as_default():

    for epoch in range(epochs):
        print('Training epoch: {}'.format(epoch + 1))
        # Randomly shuffle the training data at the beginning of each epoch 
        x_train, x_train_aux, y_train = randomize(x_train, x_train_auxiliary, y_train)

        for iteration in range(num_tr_iter):
            start = iteration * batch_size
            end = (iteration + 1) * batch_size
            x_batch, x_aux_batch, y_batch = get_next_batch(x_train, x_train_aux, y_train, start, end)

            # Run optimization op (backprop)
            feed_dict_batch = {x: x_batch, x_aux:x_aux_batch, y: y_batch,K.learning_phase(): 1}

            optimizer.run(feed_dict=feed_dict_batch)
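One thing worth double-checking in the loop above: the posted code only ever feeds `K.learning_phase(): 1`. If `acc_value` is evaluated on the validation set with the same feed dict, dropout stays active at evaluation time and will depress accuracy. A minimal framework-agnostic sketch of inverted dropout (the scheme Keras uses), assuming nothing beyond numpy, shows why the flag matters:

```python
import numpy as np

def dropout(x, rate, training, rng=None):
    """Inverted dropout: mask and rescale by 1/(1-rate) at train time; identity at test time."""
    if not training:
        return x  # evaluation: the layer must be a no-op
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones((4, 8))
train_out = dropout(x, rate=0.2, training=True)   # some units zeroed, rest scaled up
eval_out = dropout(x, rate=0.2, training=False)

# At evaluation the layer passes inputs through unchanged:
assert np.array_equal(eval_out, x)
```

In the TF1 graph above, the analogue is feeding `{K.learning_phase(): 0}` in any feed dict used to compute `acc_value` or validation loss, which mirrors what `model.evaluate` does internally in Keras.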

I also implemented the whole model from scratch in TensorFlow, but it also topped out at ~65% accuracy, so I decided to try using Keras layers inside a TF setup to isolate the problem.

I went through posts about similar Keras-vs-TensorFlow issues and tried the following, none of which helped:

  1. Keras's Dropout layers are only active during the training phase, so I set keras.backend.learning_phase() in my tf code.

  2. Keras and TensorFlow have different variable initializations. I tried initializing my weights in TensorFlow in the following 3 ways, which should match Keras's weight initialization, but they didn't affect the accuracy either:

    initer = tf.glorot_uniform_initializer() 
    initer = tf.contrib.layers.xavier_initializer() 
    initer = tf.random_normal(shape) * (np.sqrt(2.0/(shape[0] + shape[1])))
    
  3. The optimizer settings are exactly the same in both versions! Although accuracy doesn't seem to depend on the optimizer, I tried different optimizers in both keras and tf, and the accuracies all converged to the same levels.
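On point 2: Keras's default initializer for `Dense` is `glorot_uniform`, which samples from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)); `tf.glorot_uniform_initializer` and `tf.contrib.layers.xavier_initializer` implement the same rule, so matching them should indeed change nothing. Note, though, that the third variant in the list (a scaled `random_normal`) is a Glorot-style *normal* init, not the uniform one Keras uses by default. A quick numpy check of the bound, just to make the formula concrete:

```python
import numpy as np

fan_in, fan_out = 2000, 64                    # shape of the first Dense layer above
limit = np.sqrt(6.0 / (fan_in + fan_out))     # Glorot/Xavier uniform bound

rng = np.random.default_rng(42)
w = rng.uniform(-limit, limit, size=(fan_in, fan_out))

# All samples fall inside the bound, and the variance of U(-a, a) is
# a^2 / 3 = 2 / (fan_in + fan_out):
assert np.all(np.abs(w) <= limit)
assert np.isclose(w.var(), limit**2 / 3.0, rtol=0.05)
```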

Help!


2 Answers

I checked the initialization, seeds, parameters, and hyperparameters, but the accuracies were still different.

I looked into Keras's code: it randomly shuffles the batches of images before feeding them into the network, and that shuffling differs between engines. So we need a way to feed the same batches of images into both networks in order to get the same accuracy.
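Following the idea above: if both pipelines shuffle with the same seeded permutation, they will see identical batches. A numpy sketch of what the question's `randomize` helper is presumably doing, with an explicit seed added (hypothetical signature):

```python
import numpy as np

def randomize(x, x_aux, y, seed):
    """Shuffle all arrays with one shared permutation so rows stay aligned."""
    perm = np.random.default_rng(seed).permutation(len(y))
    return x[perm], x_aux[perm], y[perm]

x = np.arange(10).reshape(5, 2)       # row i is [2i, 2i+1]
x_aux = np.arange(5).reshape(5, 1)
y = np.arange(5)

# Same seed in both the Keras and the TF pipeline -> identical batch order
a = randomize(x, x_aux, y, seed=7)
b = randomize(x, x_aux, y, seed=7)
assert all(np.array_equal(u, v) for u, v in zip(a, b))
```

On the Keras side, `model.fit(..., shuffle=False)` combined with pre-shuffling the arrays the same way would pin the batch order to match.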

In my opinion this is most likely a weight-initialization problem. What I'd suggest is: build the Keras layers, grab the layer weights before training, and initialize the tf layers with those values.

I've run into a problem like this before, and doing that solved it for me, but that was a long time ago and I don't know whether they have since made those initializers the same. Back then, tf and keras initialization were clearly different.
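The weight-copy idea can be illustrated framework-agnostically: if both implementations start from byte-identical weights, their first forward passes must agree exactly, and any remaining accuracy gap then comes from the training loop rather than the init. A numpy sketch, assuming a plain Dense + ReLU layer:

```python
import numpy as np

def dense_relu(x, w, b):
    """Plain Dense + ReLU forward pass — the same math both frameworks compute."""
    return np.maximum(x @ w + b, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(2000, 64)).astype(np.float32)   # stands in for weights grabbed from the Keras layer
b = np.zeros(64, dtype=np.float32)
x = rng.normal(size=(3, 2000)).astype(np.float32)

# "Keras" and "TF" layers initialized from the same arrays give identical outputs
out_keras = dense_relu(x, w, b)
out_tf = dense_relu(x, w.copy(), b.copy())
assert np.array_equal(out_keras, out_tf)
```

In the actual code, something like `w, b = model.get_layer('dense1').get_weights()` on the Keras side and passing `tf.constant_initializer(w)` to the corresponding TF layer would realize this.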
