Keras-vis: should we use softmax or linear activation for saliency maps?

Published 2024-06-24 11:53:20


We can start by training a simple model on the MNIST dataset using the code here.

Using the keras-vis package, if we reload the MNIST model trained in that example and plot all the saliency maps for one digit (e.g. a '7') with filter indices 0 through 9, we can see that it is very hard to observe any contrast with the linear activation.

With the softmax activation, however, the contrast is clearly visible. Note that we also have to manually set a consistent colormap scale to see that contrast.

At least for defect-detection applications, a clear contrast in the saliency visualization may be important for understanding how and why an image is classified as one thing rather than another.

from __future__ import print_function
from keras.datasets import mnist
import numpy as np
from matplotlib import pyplot as plt
from vis.visualization import visualize_saliency
from vis.utils import utils
from keras import activations
from keras.models import load_model

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Load once for a model with softmax for last dense layer, and load again for one with linear swap

MODEL_PATH = "model.h5"

model = load_model(MODEL_PATH)
raw_model = load_model(MODEL_PATH)

# check layers in the model
NAMES = []
for index, layer in enumerate(model.layers):
    NAMES.append(layer.name)
    print(index, layer.name)
print('====================================================\n\n\n')

# swap softmax
layer_idx = utils.find_layer_idx(model, 'dense_2')
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)


# prepare a sample image '7'
img = x_test[0]/255

seed = img.copy()
seed = np.expand_dims(seed, 2)
seed = np.expand_dims(seed, 0)

# use a fixed absolute scale for the saliency colormaps (softmax gradients are tiny)
MAX_PIXEL_softmax = 0.01
MAX_PIXEL_linear = 1


for index in range(10):
    print('----------------------------------------------')
    print('Digit: ', index)
    f, ax = plt.subplots(1, 3)

    grads_softmax = visualize_saliency(raw_model, layer_idx, filter_indices=index,
                               seed_input=seed, backprop_modifier="guided")
    print('total:', round(grads_softmax.sum()*10000), '  max:', round(grads_softmax.max(),5), '  min:', round(grads_softmax.min(),5))
    grads_softmax[0, 0] = MAX_PIXEL_softmax  # pin a corner pixel to fix the colormap scale
    ax[0].set_title('Softmax ' + str(index))
    ax[0].imshow(grads_softmax, cmap = 'jet')

    grads_linear = visualize_saliency(model, layer_idx, filter_indices=index,
                               seed_input=seed, backprop_modifier="guided")
    print('total:', round(grads_linear.sum()), '  max:', round(grads_linear.max(),5), '  min:', round(grads_linear.min(),5))
    grads_linear[0, 0] = MAX_PIXEL_linear  # pin a corner pixel to fix the colormap scale
    ax[1].set_title('Linear ' + str(index))
    ax[1].imshow(grads_linear, cmap = 'jet')

    ax[2].set_title('Raw image')
    ax[2].imshow(img, cmap='gray')

plt.show()
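As a side note, an alternative to pinning a corner pixel (`grads_softmax[0, 0] = MAX_PIXEL_softmax`) is to fix the colormap range with `imshow`'s `vmin`/`vmax` arguments, which leaves the gradient values untouched. A minimal sketch, with random arrays standing in for the real saliency maps:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
from matplotlib import pyplot as plt

# Stand-in arrays on very different scales, mimicking softmax vs. linear gradients
rng = np.random.default_rng(0)
grads_softmax = rng.random((28, 28)) * 0.01
grads_linear = rng.random((28, 28))

f, ax = plt.subplots(1, 2)
# vmin/vmax fix the colormap range directly, so no pixel needs to be overwritten
ax[0].imshow(grads_softmax, cmap="jet", vmin=0, vmax=0.01)
ax[0].set_title("Softmax scale")
ax[1].imshow(grads_linear, cmap="jet", vmin=0, vmax=1)
ax[1].set_title("Linear scale")
```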

(Output images: for each digit index, the softmax saliency map, the linear saliency map, and the raw image side by side.)

The author's page says:

To visualize activation over final dense layer outputs, we need to switch the softmax activation out for linear since gradient of output node will depend on all the other node activations. Doing this in keras is tricky, so we provide utils.apply_modifications to modify network parameters and rebuild the graph.

If this swapping is not done, the results might be suboptimal. We will start by swapping out 'softmax' for 'linear' and compare what happens if we dont do this at the end.
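The coupling the author describes can be seen in the softmax Jacobian: the derivative of each output with respect to each logit involves all the other probabilities, whereas a linear output has an identity Jacobian. A minimal NumPy illustration (my own sketch, not part of keras-vis):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift logits for numerical stability
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])
p = softmax(z)

# d p_i / d z_j = p_i * (delta_ij - p_j): every entry is generally nonzero,
# so the gradient of one output node mixes in all the other activations.
J_softmax = np.diag(p) - np.outer(p, p)

# With a linear (identity) final activation the Jacobian is the identity,
# so each output's gradient depends only on its own logit.
J_linear = np.eye(len(z))
```

Note also that when the softmax saturates (one probability near 1), the entries of `J_softmax` shrink toward zero, which is one reason raw softmax gradients can be so small in magnitude.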

But the contrast clearly disappears with the linear activation. Is there a mistake in my code that makes the linear activation behave like this? Or, if we need to show contrast, is it simply better to use the softmax activation?
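For what it's worth, one way to compare contrast independently of the output activation's gradient magnitude is to rescale each saliency map to [0, 1] before plotting. A hedged sketch (`normalize_map` is a hypothetical helper of mine, not a keras-vis function):

```python
import numpy as np

def normalize_map(grads, eps=1e-8):
    # Rescale a saliency map to [0, 1] so maps produced on very different
    # gradient scales (softmax vs. linear) become visually comparable.
    g = grads - grads.min()
    return g / (g.max() + eps)

# Tiny softmax-scale values, similar in magnitude to those printed above
raw = np.array([[0.0, 0.002], [0.008, 0.01]])
norm = normalize_map(raw)
```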

