Early stopping and learning rate scheduling based on a custom metric in Keras

Posted 2024-05-17 03:19:51


I have an object detection model in Keras and would like to monitor my training based on the mean average precision (mAP) computed on the validation set.

I have ported the code from tensorflow-models into a script that runs the evaluation with the provided model and data. However, it is not implemented as a Keras metric, but as a standalone class:

evaluation = SSDEvaluation(model, data, data_size)
mAP = evaluation.evaluate()

I am perfectly fine with that. In fact, I don't want it computed on the training batches, because that would slow training down.

My question is: how can I reuse the ReduceLROnPlateau and EarlyStopping callbacks with a metric that is computed after each epoch?


2 Answers

You can do this with a LambdaCallback that updates the logs object:

Assuming your evaluation.evaluate() returns a dict like {'val/mAP': value}, you can do this:

from keras.callbacks import LambdaCallback

eval_callback = LambdaCallback(
    on_epoch_end=lambda epoch, logs: logs.update(evaluation.evaluate())
)
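LambdaCallback is convenient here because it lets you hook an arbitrary function into the end-of-epoch step without writing a Callback subclass.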

The trick here is that logs is passed on to the other callbacks, so they can access the value directly, for example:

early_stopping = EarlyStopping(monitor='val/mAP', mode='max')

It will automatically show up in CSVLogger and any other callback. But note that eval_callback must come before any callback that uses the value in the callbacks list:

callbacks = [eval_callback, early_stopping]
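Putting it together, a minimal end-to-end sketch (assuming, as above, that evaluation.evaluate() returns {'val/mAP': value}; the model/data names and callback parameters below are only illustrative):

from keras.callbacks import LambdaCallback, EarlyStopping, ReduceLROnPlateau, CSVLogger

eval_callback = LambdaCallback(
    on_epoch_end=lambda epoch, logs: logs.update(evaluation.evaluate())
)

callbacks = [
    eval_callback,  # must come first, so the others find 'val/mAP' in logs
    EarlyStopping(monitor='val/mAP', mode='max', patience=3),
    ReduceLROnPlateau(monitor='val/mAP', mode='max', factor=0.1, patience=1),
    CSVLogger('training_log.csv'),
]

model.fit(X_train, y_train, epochs=50, callbacks=callbacks)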

I am not sure what SSDEvaluation is, but if an average precision computation without any overhead is acceptable, I suggest the following approach using Keras callbacks.

You want to use two callbacks, EarlyStopping and ReduceLROnPlateau. Both act at the end of an epoch and monitor the loss or a metric value, which they get from the logs argument of the method:

def on_epoch_end(self, epoch, logs=None):
    """Called at the end of an epoch.
    ...
    """

- By sending the actual mAP into the logs values, we force this method, and every callback that fetches the metric value from the logs, to use it; that is where EarlyStopping and ReduceLROnPlateau pick the value up in the Keras source.
So we should "fake" the logs for both callbacks. Not an ideal solution, I guess, but a working one.

The classes below inherit from those callbacks and compute the mAP value, while avoiding recomputing it via a shared hub object:

from keras.callbacks import EarlyStopping, ReduceLROnPlateau
from sklearn.metrics import average_precision_score


class MAPHub:
    def __init__(self):
        self.map_value = None

- it's just a hub to share the mAP value between the callbacks. It may cause some side effects; you can try to avoid using it if you prefer.

def on_epoch_end(self, epoch, logs):
    """self is just a callback instance; this helper is shared by both classes below"""
    if self.last_metric_for_epoch == epoch:
        # the other callback already computed mAP this epoch - reuse it
        map_ = self.hub.map_value
    else:
        prediction = self.model.predict(self._data, verbose=1)
        map_ = average_precision_score(self._target, prediction)
        self.hub.map_value = map_
        self.last_metric_for_epoch = epoch

- this function computes the mAP and shares it, so the second callback does not repeat the expensive predict call in the same epoch.

class EarlyStoppingByMAP(EarlyStopping):
    def __init__(self, data, target, hub, *args, **kwargs):
        """
        data, target - values and target for the map calculation
        hub - shared object to store _map_ value 
        *args, **kwargs for the super __init__
        """
        # I've set monitor to 'acc' here, because you're interested in metric, not loss
        super(EarlyStoppingByMAP, self).__init__(monitor='acc', *args, **kwargs)
        self._target = target
        self._data = data 
        self.last_metric_for_epoch = -1
        self.hub = hub

    def on_epoch_end(self, epoch, logs):
        """
        epoch is the epoch number; logs is a dict with the 'loss' value,
        into which the computed mAP is injected under the 'acc' key
        """
        on_epoch_end(self, epoch, logs)      
        logs['acc'] = self.hub.map_value  # "fake" metric with calculated value
        print('Go callback from the {}, logs: \n{}'.format(EarlyStoppingByMAP.__name__, logs))
        super(EarlyStoppingByMAP, self).on_epoch_end(epoch, logs)  # works as a callback fn


class ReduceLROnPlateauByMAP(ReduceLROnPlateau):
    def __init__(self, data, target, hub, *args, **kwargs):
        # the same as in previous
        # I've set monitor to 'acc' here, because you're interested in metric, not loss
        super(ReduceLROnPlateauByMAP, self).__init__(monitor='acc', *args, **kwargs)
        self._target = target
        self._data = data 
        self.last_metric_for_epoch = -1
        self.hub = hub


    def on_epoch_end(self, epoch, logs):
        on_epoch_end(self, epoch, logs)
        logs['acc'] = self.hub.map_value   # "fake" metric with calculated value
        print('Go callback from the {}, logs: \n{}'.format(ReduceLROnPlateau.__name__, logs))
        super(ReduceLROnPlateauByMAP, self).on_epoch_end(epoch, logs)  # works as a callback fn

- NB: do not pass the monitor argument to these constructors! You would need 'acc' here, and the parameter is already set to that correct value.

Some testing:

from keras.datasets import mnist
from keras.models import Model
from keras.layers import Dense, Input
import numpy as np

(X_tr, y_tr), (X_te, y_te) = mnist.load_data()
X_tr = (X_tr / 255.).reshape((60000, 784))
X_te = (X_te / 255.).reshape((10000, 784))


def binarize_labels(y):
    y_bin = np.zeros((len(y), len(np.unique(y)))) 
    y_bin[range(len(y)), y] = 1
    return y_bin

y_train_bin, y_test_bin = binarize_labels(y_tr), binarize_labels(y_te)
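# (note: binarize_labels is just one-hot encoding; keras.utils.to_categorical
# would produce the same matrix)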


inp = Input(shape=(784,))
x = Dense(784, activation='relu')(inp)
x = Dense(256, activation='relu')(x)
out = Dense(10, activation='softmax')(x)

model = Model(inp, out)
model.compile(loss='categorical_crossentropy', optimizer='adam')

- a simple "test suite". Now try it out:

hub = MAPHub()  # instantiate a hub
# I will use default params except patience as an example; set it to 1 and 5
early_stop = EarlyStoppingByMAP(X_te, y_test_bin, hub, patience=1)  # patience is EarlyStopping's param
reduce_lr = ReduceLROnPlateauByMAP(X_te, y_test_bin, hub, patience=5)  # patience is ReduceLR's param

history = model.fit(X_tr, y_train_bin, epochs=10, callbacks=[early_stop, reduce_lr])
Out:
Epoch 1/10
60000/60000 [==============================] - 12s 207us/step - loss: 0.1815
10000/10000 [==============================] - 1s 59us/step
Go callback from the EarlyStoppingByMAP, logs: 
{'loss': 0.18147853660446903, 'acc': 0.9934216252519924}
10000/10000 [==============================] - 0s 40us/step
Go callback from the ReduceLROnPlateau, logs: 
{'loss': 0.18147853660446903, 'acc': 0.9934216252519924}
Epoch 2/10
60000/60000 [==============================] - 12s 197us/step - loss: 0.0784
10000/10000 [==============================] - 0s 40us/step
Go callback from the EarlyStoppingByMAP, logs: 
{'loss': 0.07844233275586739, 'acc': 0.9962269038764738}
10000/10000 [==============================] - 0s 41us/step
Go callback from the ReduceLROnPlateau, logs: 
{'loss': 0.07844233275586739, 'acc': 0.9962269038764738}
Epoch 3/10
60000/60000 [==============================] - 12s 197us/step - loss: 0.0556
10000/10000 [==============================] - 0s 40us/step
Go callback from the EarlyStoppingByMAP, logs: 
{'loss': 0.05562876497630107, 'acc': 0.9972085346550085}
10000/10000 [==============================] - 0s 40us/step
Go callback from the ReduceLROnPlateau, logs: 
{'loss': 0.05562876497630107, 'acc': 0.9972085346550085}
Epoch 4/10
60000/60000 [==============================] - 12s 198us/step - loss: 0.0389
10000/10000 [==============================] - 0s 41us/step
Go callback from the EarlyStoppingByMAP, logs: 
{'loss': 0.0388911374788188, 'acc': 0.9972696414934574}
10000/10000 [==============================] - 0s 41us/step
Go callback from the ReduceLROnPlateau, logs: 
{'loss': 0.0388911374788188, 'acc': 0.9972696414934574}
Epoch 5/10
60000/60000 [==============================] - 12s 197us/step - loss: 0.0330
10000/10000 [==============================] - 0s 39us/step
Go callback from the EarlyStoppingByMAP, logs: 
{'loss': 0.03298293751536124, 'acc': 0.9959456176387349}
10000/10000 [==============================] - 0s 39us/step
Go callback from the ReduceLROnPlateau, logs: 
{'loss': 0.03298293751536124, 'acc': 0.9959456176387349}

OK, it seems at least the early stopping one works. I suppose ReduceLROnPlateau does too, since they use the same logs and similar logic, provided appropriate parameters are set.

If you don't want to use the sklearn function but your SSDEvaluation (I just couldn't find what it is), you can easily adapt the on_epoch_end function to handle that evaluation function instead.
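For instance, a hypothetical adaptation of the shared on_epoch_end helper, assuming evaluate() returns the scalar mAP as in the question and that the callback's constructor stores data and data_size instead of data and target:

def on_epoch_end(self, epoch, logs):
    """self is a callback instance holding _data, _data_size and the hub"""
    if self.last_metric_for_epoch == epoch:
        map_ = self.hub.map_value
    else:
        # run the ported evaluation instead of sklearn's average_precision_score
        evaluation = SSDEvaluation(self.model, self._data, self._data_size)
        map_ = evaluation.evaluate()
        self.hub.map_value = map_
        self.last_metric_for_epoch = epoch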

Hope it helps.
