How to make sure Keras model weights are randomly re-initialized on every fit

I am training a Keras model on my data. I have to split the data into 3 parts, and I call the same Keras model on each part, trying to fit and predict consecutively.

I suspect that every time I call the model, its weights stay where the previous training converged, so the next fit starts minimizing the error from that previous state. I want the model to start fitting from a different random weight initialization each time it is trained. Since all 3 splits are subsets of the same dataset, I do not want any data leakage into the model from having already seen one split while training on another.

How can I tell whether the weights are re-initialized on every model fit? And if they are not, how can I make that happen?

Here is what my code looks like:



from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(512, input_dim=77, kernel_initializer='RandomNormal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(256, kernel_initializer='RandomNormal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, kernel_initializer='RandomNormal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(256, kernel_initializer='RandomNormal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, kernel_initializer='RandomNormal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(256, kernel_initializer='RandomNormal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))


# Compile model
model.compile(loss='mean_absolute_error', optimizer='adam')


# evaluate model
history = model.fit(scaler.transform(X_train_high), y_train_high,
                    batch_size=128,
                    epochs=5)
results = model.evaluate(scaler.transform(X_train_high), y_train_high, batch_size=128)
print('High test loss, test acc:', results)

# evaluate model
history = model.fit(scaler.transform(X_train_medium), y_train_medium,
                    batch_size=128,
                    epochs=5)
results = model.evaluate(scaler.transform(X_train_medium), y_train_medium, batch_size=128)
print(' Medium test loss, test acc:', results)

# evaluate model
history = model.fit(scaler.transform(X_train_low), y_train_low,
                    batch_size=128,
                    epochs=5)
results = model.evaluate(scaler.transform(X_train_low), y_train_low, batch_size=128)
print('Low test loss, test acc:', results)

Tags: data, model, add, model, transform, train, activation, kernel
1 Answer

The model keeps its weights until you redefine it:

def define_model():
    model = Sequential()
    model.add(Dense(512, input_dim=77, kernel_initializer='RandomNormal', activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(256, kernel_initializer='RandomNormal', activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(512, kernel_initializer='RandomNormal', activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(256, kernel_initializer='RandomNormal', activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(512, kernel_initializer='RandomNormal', activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(256, kernel_initializer='RandomNormal', activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(1))
    return model

model=define_model()
# Compile model
model.compile(loss='mean_absolute_error', optimizer='adam')


# evaluate model
history = model.fit(scaler.transform(X_train_high), y_train_high,
                    batch_size=128,
                    epochs=5)
results = model.evaluate(scaler.transform(X_train_high), y_train_high, batch_size=128)
print('High test loss, test acc:', results)

model=define_model()

model.compile(loss='mean_absolute_error', optimizer='adam')
# evaluate model
history = model.fit(scaler.transform(X_train_medium), y_train_medium,
                    batch_size=128,
                    epochs=5)
results = model.evaluate(scaler.transform(X_train_medium), y_train_medium, batch_size=128)
print(' Medium test loss, test acc:', results)
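
If you prefer not to rebuild the model for each split, a minimal alternative sketch (my own addition, not from the original answer) is to capture the freshly initialized weights once and restore them before every fit; re-compiling with the 'adam' string also creates a new optimizer, so its state is reset as well:

initial_weights = model.get_weights()   # weights captured right after the model is first built

# ... fit and evaluate on the first split ...

model.set_weights(initial_weights)      # restore the original random initialization
model.compile(loss='mean_absolute_error', optimizer='adam')   # fresh optimizer state
# ... fit and evaluate on the next split ...

Note this reuses the same random starting point for every split rather than drawing new random weights each time, which is still enough to avoid carrying information from one split over to the next.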

You can verify this with model.get_weights().
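
For example, a quick sanity check (a sketch; numpy is assumed to be available) that the first Dense layer's kernel really changed after redefining the model:

import numpy as np

w_before = model.get_weights()[0]        # kernel of the first Dense layer, old model
model = define_model()
model.compile(loss='mean_absolute_error', optimizer='adam')
w_after = model.get_weights()[0]         # kernel of the freshly built model
print('weights re-initialized:', not np.array_equal(w_before, w_after))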
