Custom loss function for Keras

Published 2024-09-30 01:37:30


I want to implement a custom Keras loss function for the focal loss described in the paper; however, I need to access the weights of the trainable layers. Is there a way to read the layers' weights inside the loss function during training?
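For reference, the focal loss of Lin et al. scales the standard cross-entropy so that well-classified examples contribute less:

FL(p_t) = -(1 - p_t)^γ · log(p_t)

where p_t is the predicted probability of the true class and γ (gamma_focal in the code below) controls how strongly easy examples are down-weighted; the code below additionally applies median-frequency class weights and an L2 weight-decay term.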

import numpy as np
import tensorflow as k  # TensorFlow ops are referenced through the alias `k` below


def focal_coef(model, y_true, y_pred, content, label_remap, gamma_focal=2, w_d=1e-4):
    # per-class weights: inverse frequency, normalised by the median frequency
    w = np.empty(len(content))
    f = np.empty(len(content))
    for key in content:
        e = 0.001
        f_c = content[key]
        f[label_remap[key]] = f_c
        w[label_remap[key]] = 1 / (f_c + e)
    median_freq = np.median(f)
    print("\nFrequencies of classes:\n", f)
    print("\nMedian freq:\n", median_freq)
    print("\nWeights for loss function (1/freq(c)):\n", w)
    w = (median_freq * w).astype(np.float32)
    print("\nWeights for loss function (median freq/freq(c)):\n", w)
    w_mask = w.astype(bool).astype(np.float32)
    print("w_mask", w_mask)
    softmax = k.nn.softmax(y_pred)
    softmax_mat = k.reshape(softmax, (-1, len(content)))
    zerohot_softmax_mat = 1 - softmax_mat
    # make the labels one-hot for the cross-entropy
    print("datatype", y_true.dtype)
    onehot_mat = k.reshape(k.one_hot(k.cast(y_true, k.int32), len(content)),
                           (-1, len(content)))
    # make the zero-hot to punish the false negatives, but ignore the
    # zero-weight classes
    masked_sum = k.reduce_sum(onehot_mat * w_mask, axis=1)
    zeros = onehot_mat * 0.0
    zerohot_mat = k.where(k.expand_dims(k.less(masked_sum, 1e-5), axis=-1),
                          x=zeros,
                          y=1 - onehot_mat)
    # focal loss p and gamma
    loss_epsilon = 1e-10
    gamma_tf = k.constant(gamma_focal, dtype=k.float32)
    focal_softmax = k.pow(1 - softmax_mat, gamma_tf) * \
        k.math.log(softmax_mat + loss_epsilon)
    zerohot_focal_softmax = k.pow(1 - zerohot_softmax_mat, gamma_tf) * \
        k.math.log(zerohot_softmax_mat + loss_epsilon)

    # weighted focal cross-entropy per sample, then averaged over the batch
    cross_entropy = -k.reduce_sum(k.multiply(focal_softmax * onehot_mat +
                                             zerohot_focal_softmax * zerohot_mat, w),
                                  axis=[1])
    loss = k.reduce_mean(cross_entropy, name='xentropy_mean')
    # L2 weight decay over the weights of every trainable layer
    for layer in model.layers:
        if layer.trainable:
            for weight in layer.trainable_weights:
                loss += w_d * (k.reduce_sum(k.square(weight)) / 2)
    return loss


def focal_loss(model, content, label_remap, gamma_focal=2, w_d=1e-4):
    def focal(y_true, y_pred):
        return -focal_coef(model, y_true, y_pred, content, label_remap, gamma_focal, w_d)

    return focal
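A minimal usage sketch, assuming content maps class names to their frequencies and label_remap maps the same names to integer labels; the example values below, as well as model, x_train and y_train, are hypothetical placeholders:

# hypothetical example values, only to show how the closure is wired into training
content = {"background": 0.70, "road": 0.25, "car": 0.05}  # class -> frequency
label_remap = {"background": 0, "road": 1, "car": 2}        # class -> integer label

model.compile(optimizer="adam",
              loss=focal_loss(model, content, label_remap, gamma_focal=2, w_d=1e-4))
model.fit(x_train, y_train, epochs=10)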

1 Answer

Posted on 2024-09-30 01:37:30

Your custom loss function must take exactly two arguments, y_true and y_pred. If you need anything beyond that, you can fetch it from inside the loss function; since you need the weights of the layers, the custom loss function must have access to the model:

model = ...

def custom_loss(y_true, y_pred):
    w1 = model.layers[0].get_weights()[0]  # kernel of the first layer
    b1 = model.layers[0].get_weights()[1]  # bias of the first layer
    w2 = model.layers[1].get_weights()[0]  # kernel of the second layer
    .
    .
    .
    loss = ...
    return loss

model.compile(loss=custom_loss, optimizer=...)
model.fit(...)

With access to the model, all of its weights are available inside the loss function.
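Note, however, that get_weights() returns NumPy copies, so the values are fixed when the loss is built and that term receives no gradient. For a weight penalty that is actually trained, the layer's live weight tensors can be referenced instead. A minimal sketch, assuming a tf.keras model and reusing the w_d = 1e-4 decay factor from the question:

import tensorflow as tf

model = ...  # same model as above

def custom_loss(y_true, y_pred):
    data_loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
    # build the L2 penalty from the live weight tensors so it stays differentiable
    l2 = tf.add_n([tf.reduce_sum(tf.square(v)) for v in model.trainable_weights])
    return data_loss + 1e-4 * l2

model.compile(loss=custom_loss, optimizer="adam")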
