Training only the biases of a pretrained Keras/TensorFlow model

Posted 2024-06-24 11:46:05


Sounds strange, I know! But: is it possible to train just the biases? I have pretrained the model, but after applying a low-rank approximation to the weights, the network's accuracy obviously drops... Is there a way to tell the Keras/TensorFlow compiler to train only the biases? Of course, I don't know if this really makes sense (I suspect it's a silly idea...), but I want to test whether the accuracy recovers or not.
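
For context, "applied a low-rank approximation to the weights" could be read as replacing each Dense kernel with a truncated-SVD reconstruction. A minimal sketch of that reading (the rank r=2 and the NumPy-based SVD are assumptions, not taken from the question):

import numpy as np

def low_rank_approx(w, r):
    # Truncated SVD: keep only the r largest singular values/vectors.
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return (u[:, :r] * s[:r]) @ vt[:r, :]

# Assuming `model` is a Keras model built from Dense layers
# (kernel + bias pairs), as in the answer below:
for layer in model.layers:
    kernel, bias = layer.get_weights()
    layer.set_weights([low_rank_approx(kernel, r=2), bias])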


Tags: compiler, nn, keras, weights, bias, accuracy, model
1 Answer
User
#1 · Posted on 2024-06-24 11:46:05

When applying gradients, you can manually choose which variables to update, like this:

def get_grad(model, x, y):
    with tf.GradientTape() as tape:
        loss = compute_loss(model, x, y, training=True)
    # Indices 1, 3, 5, 7 are the bias variables of the four Dense
    # layers (the kernels sit at the even indices 0, 2, 4, 6).
    to_update = [v for ix, v in enumerate(model.trainable_variables) if ix in (1, 3, 5, 7)]
    return loss, tape.gradient(loss, to_update)

It returns the gradients for variables 1, 3, 5 and 7, which are the biases. And yes, it does work:

Epoch 10 Loss: 1.013 Acc: 33.33%
Epoch 11 Loss: 1.006 Acc: 34.00%
Epoch 12 Loss: 0.999 Acc: 34.00%
Epoch 13 Loss: 0.993 Acc: 36.00%
Epoch 14 Loss: 0.987 Acc: 39.33%
Epoch 15 Loss: 0.982 Acc: 48.67%
Epoch 16 Loss: 0.979 Acc: 53.33%
Epoch 17 Loss: 0.975 Acc: 56.00%
Epoch 18 Loss: 0.972 Acc: 59.33%
Epoch 19 Loss: 0.969 Acc: 60.67%
Epoch 20 Loss: 0.967 Acc: 61.33%
Epoch 21 Loss: 0.966 Acc: 61.33%
Epoch 22 Loss: 0.962 Acc: 61.33%
Epoch 23 Loss: 0.961 Acc: 62.67%
Epoch 24 Loss: 0.961 Acc: 62.00%
Epoch 25 Loss: 0.959 Acc: 62.67%
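
Hard-coding the indices (1, 3, 5, 7) works for this particular four-layer model, but selecting the biases by variable name is more robust. A sketch of get_grad rewritten that way (it relies on Keras including 'bias' in the names of Dense bias variables, which standard layers do):

def get_grad(model, x, y):
    with tf.GradientTape() as tape:
        loss = compute_loss(model, x, y, training=True)
    # Pick every trainable variable whose name marks it as a bias.
    to_update = [v for v in model.trainable_variables if 'bias' in v.name]
    return loss, tape.gradient(loss, to_update)

The same name filter can then replace the index-based list in the training loop before calling apply_gradients.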

Full code:

import tensorflow as tf
from tensorflow.keras.layers import Dense
from sklearn.datasets import load_iris
import numpy as np

X, y = load_iris(return_X_y=True)
X = X.astype(np.float32)

train = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(25).batch(8)

# Four Dense layers, so model.trainable_variables holds kernel/bias
# pairs at indices 0/1, 2/3, 4/5, 6/7.
model = tf.keras.Sequential([
    Dense(16, activation='relu'),
    Dense(32, activation='relu'),
    Dense(64, activation='relu'),
    Dense(3, activation='softmax')])

loss_object = tf.losses.SparseCategoricalCrossentropy(from_logits=False)


def compute_loss(model, x, y, training):
    out = model(x, training=training)
    loss = loss_object(y_true=y, y_pred=out)
    return loss


def get_grad(model, x, y):
    with tf.GradientTape() as tape:
        loss = compute_loss(model, x, y, training=True)
    # Indices 1, 3, 5, 7 are the bias variables; see the note above.
    to_update = [v for ix, v in enumerate(model.trainable_variables) if ix in (1, 3, 5, 7)]
    return loss, tape.gradient(loss, to_update)


optimizer = tf.optimizers.Adam()

verbose = "Epoch {:2d} Loss: {:.3f} Acc: {:.2%}"

model.build(input_shape=(None, 4))
weights_before = model.layers[0].get_weights()  # snapshot of the first layer's kernel and bias

for epoch in range(1, 25 + 1):
    train_loss = tf.metrics.Mean()
    train_acc = tf.metrics.SparseCategoricalAccuracy()

    for x, y in train:
        loss_value, grads = get_grad(model, x, y)
        # Re-select the bias variables and apply gradients to them only;
        # the kernels never receive an update.
        to_update = [v for ix, v in enumerate(model.trainable_variables) if ix in (1, 3, 5, 7)]
        optimizer.apply_gradients(zip(grads, to_update))
        train_loss.update_state(loss_value)
        train_acc.update_state(y, model(x, training=True))

    print(verbose.format(epoch,
                         train_loss.result(),
                         train_acc.result()))

weights_after = model.layers[0].get_weights()  # to compare against weights_before
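
To confirm that only the biases moved, the two snapshots of the first layer can be compared directly (a sketch; np.allclose with default tolerances is an assumption):

kernel_before, bias_before = weights_before
kernel_after, bias_after = weights_after
print("kernel unchanged:", np.allclose(kernel_before, kernel_after))  # expected: True
print("bias unchanged:", np.allclose(bias_before, bias_after))        # expected: False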
