How to stop Keras from randomly printing Epoch 1 while training on Google Colab

Posted 2024-10-02 14:17:37


I'm currently trying to grid-search parameters for a fairly simple deep-learning model in Keras (tf.keras):

    import math
    import time

    import numpy as np
    import tensorflow as tf

    # getTensorParams, pre, dataGeneratorLoad, the layer sizes, activation
    # functions and dropout rates come from the surrounding grid-search code.
    params = getTensorParams()
    batchSize = 64*2*2
    epochs = 15
    learningRate = 0.01
    noVals = np.load(pre + 'noVals.npy')
    noBatches = math.ceil(noVals / batchSize)

    call = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=4,
                                            verbose=2, mode='auto',
                                            restore_best_weights=True)

    timeStart = time.time()
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.BatchNormalization(epsilon=0.001, center=True, scale=True))
    model.add(tf.keras.layers.Dense(layerSize1, activation=activationFunc1,
                                    activity_regularizer=tf.keras.regularizers.l1(0.001)))
    model.add(tf.keras.layers.Dropout(dropout1))
    model.add(tf.keras.layers.Dense(layerSize2, activation=activationFunc2,
                                    activity_regularizer=tf.keras.regularizers.l2(0.001)))
    model.add(tf.keras.layers.Dropout(dropout2))
    model.add(tf.keras.layers.Dense(1, activation='sigmoid'))

    opt = tf.keras.optimizers.Adam(lr=learningRate, decay=learningRate/epochs)
    model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])

    results = model.fit_generator(
        dataGeneratorLoad(noVals, batchSize, params['fileLength'], pre),
        steps_per_epoch=noBatches, verbose=2,
        epochs=epochs,
        validation_data=(validation_x, validation_y),
        callbacks=[call]
    )
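As an aside, `fit_generator` is deprecated in recent TensorFlow versions (2.1+), where `model.fit` accepts generators directly. Below is a minimal, self-contained sketch of that style, with a stand-in random-data generator (the real `dataGeneratorLoad` and its parameters are from the question and are not reproduced here):

```python
import numpy as np
import tensorflow as tf

batch_size = 8
steps_per_epoch = 4
epochs = 2

def data_generator():
    # Stand-in for the question's dataGeneratorLoad(...): yields
    # (features, labels) batches forever, as Keras expects.
    while True:
        x = np.random.rand(batch_size, 10).astype("float32")
        y = np.random.randint(0, 2, size=(batch_size, 1)).astype("float32")
        yield x, y

# Tiny binary classifier, analogous in shape to the question's model.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# In-memory validation split, as in the original code.
val_x = np.random.rand(16, 10).astype("float32")
val_y = np.random.randint(0, 2, size=(16, 1)).astype("float32")

history = model.fit(
    data_generator(),              # model.fit accepts the generator directly
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    validation_data=(val_x, val_y),
    verbose=2,                     # one summary line per epoch
)
```

With `verbose=2` this should print exactly one `Epoch N/…` header plus one summary line per epoch, which is the output shape the question expects.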

That is the complete relevant code. It runs as expected locally. However, when run on Google Colab with the GPU runtime, it produces the following output:

    Epoch 1/15
    145500/1701 - 7s - loss: 1.0490 - acc: 0.6385
    1701/1701 - 196s - loss: 0.2924 - acc: 0.8286 - val_loss: 0.7632 - val_acc: 0.6385
    Epoch 2/15
    Epoch 1/15
    145500/1701 - 7s - loss: 1.0077 - acc: 0.6463
    1701/1701 - 142s - loss: 0.2255 - acc: 0.8664 - val_loss: 0.8231 - val_acc: 0.6463
    Epoch 3/15
    Epoch 1/15
    145500/1701 - 7s - loss: 1.1563 - acc: 0.6394
    1701/1701 - 143s - loss: 0.1931 - acc: 0.8899 - val_loss: 0.8333 - val_acc: 0.6394
    Epoch 4/15
    Epoch 1/15
    145500/1701 - 7s - loss: 0.9712 - acc: 0.6381
    1701/1701 - 140s - loss: 0.2216 - acc: 0.8176 - val_loss: 0.8932 - val_acc: 0.6381
    Epoch 5/15
    Epoch 1/15
    145500/1701 - 7s - loss: 1.0497 - acc: 0.6369

It does this on every epoch. Just wondering whether anyone has run into this or knows a fix. This is my first question, so let me know if anything should be added. The most puzzling part is that it works fine locally, and I can't find this issue documented anywhere else.


Tags: add, true, model, layers, tf, val, keras, dense
