Keras custom layer ValueError: an operation has None for gradient

Posted 2024-09-30 20:33:46


I created a custom Keras Conv2D layer as follows:

from keras import activations
from keras import backend as K
from keras.engine import InputSpec  # keras.layers.InputSpec in newer Keras versions
from keras.initializers import RandomUniform
from keras.layers import Conv2D
from keras.utils import conv_utils


class CustConv2D(Conv2D):

    def __init__(self, filters, kernel_size, kernelB=None, activation=None, **kwargs): 
        self.rank = 2
        self.num_filters = filters
        self.kernel_size = conv_utils.normalize_tuple(kernel_size, self.rank, 'kernel_size')
        self.kernelB = kernelB
        self.activation = activations.get(activation)

        super(CustConv2D, self).__init__(self.num_filters, self.kernel_size, **kwargs)

    def build(self, input_shape):
        if K.image_data_format() == 'channels_first':
            channel_axis = 1
        else:
            channel_axis = -1
        if input_shape[channel_axis] is None:
            raise ValueError('The channel dimension of the inputs '
                             'should be defined. Found `None`.')

        input_dim = input_shape[channel_axis]
        num_basis = K.int_shape(self.kernelB)[-1]

        # kernelA holds the trainable coefficients; the effective convolution
        # kernel is a linear combination of the fixed basis kernelB weighted by kernelA.
        kernel_shape = (num_basis, input_dim, self.num_filters)

        self.kernelA = self.add_weight(shape=kernel_shape,
                                       initializer=RandomUniform(minval=-1.0,
                                                                 maxval=1.0, seed=None),
                                       name='kernelA',
                                       regularizer=self.kernel_regularizer,
                                       constraint=self.kernel_constraint)

        self.kernel = K.sum(self.kernelA[None, None, :, :, :] * self.kernelB[:, :, :, None, None], axis=2)

        # Set input spec.
        self.input_spec = InputSpec(ndim=self.rank + 2, axes={channel_axis: input_dim})
        self.built = True
        super(CustConv2D, self).build(input_shape)

I use CustConv2D as the first Conv layer of my model.


The model compiles fine, but during training it gives me the following error:

ValueError: An operation has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

Is there a way to find out which operation is throwing the error? Also, is there any implementation mistake in the way I wrote the custom layer?


2 Answers

You are destroying your build by calling the original Conv2D build: your self.kernel gets replaced, so self.kernelA is never used and backpropagation never reaches it.
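For context, in Keras 2.x the stock Conv2D build does roughly the following (a simplified sketch from memory, not the exact source), which is why calling it at the end of your own build replaces the kernel you just computed with a fresh, unrelated weight:

def build(self, input_shape):
    # ... determine channel_axis and input_dim from input_shape ...
    kernel_shape = self.kernel_size + (input_dim, self.filters)

    # Creates its own trainable kernel, overwriting any self.kernel set earlier.
    self.kernel = self.add_weight(shape=kernel_shape,
                                  initializer=self.kernel_initializer,
                                  name='kernel',
                                  regularizer=self.kernel_regularizer,
                                  constraint=self.kernel_constraint)

    # Also creates (or clears) the bias.
    if self.use_bias:
        self.bias = self.add_weight(shape=(self.filters,),
                                    initializer=self.bias_initializer,
                                    name='bias',
                                    regularizer=self.bias_regularizer,
                                    constraint=self.bias_constraint)
    else:
        self.bias = None

    self.input_spec = InputSpec(ndim=self.rank + 2, axes={channel_axis: input_dim})
    self.built = True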

It also expects the bias and all the usual machinery:

class CustConv2D(Conv2D):

    def __init__(self, filters, kernel_size, kernelB=None, activation=None, **kwargs): 

        #...
        #...

        #don't use bias if you're not defining it:
        super(CustConv2D, self).__init__(self.num_filters, self.kernel_size, 
              activation=activation,
              use_bias=False, **kwargs)

        #bonus: don't forget to add the activation to the call above
        #it will also replace all your `self.anything` defined before this call   


    def build(self, input_shape):

        #...
        #...

        #don't use bias:
        self.bias = None

        #consider the layer built
        self.built = True

        #do not destroy your build
        #comment: super(CustConv2D, self).build(input_shape)
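Putting the question's layer and these fixes together, the whole thing might look something like this (an untested sketch, assuming the same Keras 2.x imports as in the question; kernelB is still expected to be a tensor whose last axis is the number of basis kernels):

class CustConv2D(Conv2D):

    def __init__(self, filters, kernel_size, kernelB=None, activation=None, **kwargs):
        self.rank = 2
        self.num_filters = filters
        self.kernel_size = conv_utils.normalize_tuple(kernel_size, self.rank, 'kernel_size')
        self.kernelB = kernelB

        # Forward the activation and disable the bias we never create.
        super(CustConv2D, self).__init__(self.num_filters, self.kernel_size,
                                         activation=activation, use_bias=False,
                                         **kwargs)

    def build(self, input_shape):
        if K.image_data_format() == 'channels_first':
            channel_axis = 1
        else:
            channel_axis = -1
        if input_shape[channel_axis] is None:
            raise ValueError('The channel dimension of the inputs '
                             'should be defined. Found `None`.')

        input_dim = input_shape[channel_axis]
        num_basis = K.int_shape(self.kernelB)[-1]

        kernel_shape = (num_basis, input_dim, self.num_filters)

        self.kernelA = self.add_weight(shape=kernel_shape,
                                       initializer=RandomUniform(minval=-1.0,
                                                                 maxval=1.0, seed=None),
                                       name='kernelA',
                                       regularizer=self.kernel_regularizer,
                                       constraint=self.kernel_constraint)

        # The kernel actually used by Conv2D.call: a combination of the
        # trainable kernelA and the fixed basis kernelB.
        self.kernel = K.sum(self.kernelA[None, None, :, :, :] *
                            self.kernelB[:, :, :, None, None], axis=2)

        self.bias = None  # we told the parent not to use a bias
        self.input_spec = InputSpec(ndim=self.rank + 2, axes={channel_axis: input_dim})
        self.built = True
        # Do NOT call super().build here, or self.kernel gets replaced.

With use_bias=False passed to the parent and the activation forwarded, the inherited Conv2D.call convolves with the combined kernel and applies the activation, and kernelA is the only trainable weight left, so it now receives gradients.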

This may be because some weights in your code are not used when computing the output, so their gradient with respect to the loss is None.

Here is an example with code and output: https://github.com/keras-team/keras/issues/12521#issuecomment-496743146
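If you want to check this yourself, one option is to ask the backend for the gradients of the compiled model's loss with respect to every trainable weight and look for None entries. A minimal sketch, assuming a compiled Keras 2.x model named model on the TensorFlow backend:

from keras import backend as K

# After model.compile(...): any weight whose gradient comes back as None
# is not connected to the loss and will trigger this error during training.
grads = K.gradients(model.total_loss, model.trainable_weights)
for weight, grad in zip(model.trainable_weights, grads):
    if grad is None:
        print('No gradient for weight:', weight.name)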
