ReduceLROnPlateau: go back to the weights with the minimum acc/loss before reducing the LR

Posted 2024-10-01 02:25:56


I use ReduceLROnPlateau as a fit callback to reduce the LR, with patience=10, so by the time the LR reduction is triggered the model may be far from its best weights.

Is there a way to go back to the weights with the minimum acc/loss and resume training from there with the new LR?

Does that make sense?

I could do it manually with EarlyStopping and a ModelCheckpoint('best.hdf5', save_best_only=True, monitor='val_loss', mode='min') callback, but I don't know whether that makes sense. Roughly, the manual version I have in mind is sketched below.
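
A sketch of that manual approach (assumes `model` is an already compiled tf.keras model and the training arrays exist; the file name and hyperparameters are placeholders):

import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Save the best weights seen so far, and stop once val_loss plateaus.
callbacks = [
    ModelCheckpoint('best.hdf5', save_best_only=True,
                    monitor='val_loss', mode='min'),
    EarlyStopping(monitor='val_loss', patience=10),
]
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=callbacks)

# Once training stops: restore the best weights, lower the LR by hand,
# and call fit() again to resume from that point.
model.load_weights('best.hdf5')
old_lr = tf.keras.backend.get_value(model.optimizer.learning_rate)
tf.keras.backend.set_value(model.optimizer.learning_rate, old_lr * 0.1)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=callbacks)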


Tags: model, save, manual, fit, hdf5, acc, weights, loss

2 Answers

Here is a working example that follows @nuric's directions:

import logging

from tensorflow.keras.callbacks import ReduceLROnPlateau

class ReduceLRBacktrack(ReduceLROnPlateau):
    def __init__(self, best_path, *args, **kwargs):
        super(ReduceLRBacktrack, self).__init__(*args, **kwargs)
        self.best_path = best_path

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get(self.monitor)
        if current is None:
            logging.warning('Reduce LR on plateau conditioned on metric `%s` '
                            'which is not available. Available metrics are: %s',
                            self.monitor, ','.join(list(logs.keys())))
        elif not self.monitor_op(current, self.best):  # not a new best
            if not self.in_cooldown():  # and we're not in cooldown
                if self.wait + 1 >= self.patience:  # LR reduction is about to trigger
                    # load the best model seen so far before the LR drops
                    print("Backtracking to best model before reducing LR")
                    self.model.load_weights(self.best_path)

        super().on_epoch_end(epoch, logs)  # actually reduce the LR

A ModelCheckpoint callback can be used to keep the best-model dump up to date, e.g. pass two callbacks like the following to model.fit (a sketch; the factor/patience values are placeholders):

from tensorflow.keras.callbacks import ModelCheckpoint

# best_model_path must be the same path passed to ReduceLRBacktrack
model_checkpoint = ModelCheckpoint(best_model_path, save_best_only=True,
                                   monitor='val_loss', mode='min')
reduce_lr = ReduceLRBacktrack(best_path=best_model_path, monitor='val_loss',
                              factor=0.5, patience=10)

model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=[model_checkpoint, reduce_lr])
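
Because save_best_only=True keeps overwriting best_model_path whenever val_loss improves, the dump always holds the best weights so far, which is exactly what ReduceLRBacktrack reloads right before the parent callback lowers the LR.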

You can create a custom callback that inherits from ReduceLROnPlateau, like so:

class CheckpointLR(ReduceLROnPlateau):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.last_weights = None  # weights from the previous epoch

    # override on_epoch_end()
    def on_epoch_end(self, epoch, logs=None):
        if not self.in_cooldown():
            # swap the current weights with the checkpointed ones
            temp = self.model.get_weights()
            if self.last_weights is not None:
                self.model.set_weights(self.last_weights)
            self.last_weights = temp
        super().on_epoch_end(epoch, logs)  # actually reduce LR
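
It accepts the same constructor arguments as the stock ReduceLROnPlateau; a hypothetical call (hyperparameters are placeholders) might look like:

lr_callback = CheckpointLR(monitor='val_loss', factor=0.5, patience=10)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=[lr_callback])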
