TensorFlow 2.0: shape inference with Reshape returns a None dimension

Published 2024-09-29 17:15:17


I am using a CNN-LSTM model with TensorFlow 2.0 + Keras to perform sequence classification. My model is defined as follows:

    from tensorflow.keras.layers import (Input, Reshape, Conv1D, MaxPooling1D,
                                         AveragePooling1D, Dropout, Flatten,
                                         Bidirectional, LSTM, TimeDistributed,
                                         Dense)
    from tensorflow.keras.models import Model

    inp = Input(input_shape)
    rshp = Reshape((input_shape[0]*input_shape[1], 1), input_shape=input_shape)(inp)
    cnn1 = Conv1D(100, 9, activation='relu')(rshp)
    cnn2 = Conv1D(100, 9, activation='relu')(cnn1)
    mp1 = MaxPooling1D((3,))(cnn2)
    cnn3 = Conv1D(50, 3, activation='relu')(mp1)
    cnn4 = Conv1D(50, 3, activation='relu')(cnn3)
    gap1 = AveragePooling1D((3,))(cnn4)
    dropout1 = Dropout(rate=dropout[0])(gap1)
    flt1 = Flatten()(dropout1)
    rshp2 = Reshape((input_shape[0], -1), input_shape=flt1.shape)(flt1)
    bilstm1 = Bidirectional(LSTM(240,
                                 return_sequences=True,
                                 recurrent_dropout=dropout[1]),
                            merge_mode=merge)(rshp2)
    dense1 = TimeDistributed(Dense(30, activation='relu'))(bilstm1)
    dropout2 = Dropout(rate=dropout[2])(dense1)
    prediction = TimeDistributed(Dense(1, activation='sigmoid'))(dropout2)

    model = Model(inp, prediction, name="CNN-bLSTM_per_segment")
    print(model.summary(line_length=75))

where input_shape = (60, 60). However, this definition raises the following error:

TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'

At first I thought this was because the rshp2 layer could not reshape the flt1 output to (60, X), so I added a block of print statements before the Bidirectional(LSTM) layer:

    print('reshape1: ', rshp.shape)
    print('cnn1: ', cnn1.shape)
    print('cnn2: ', cnn2.shape)
    print('mp1: ', mp1.shape)
    print('cnn3: ', cnn3.shape)
    print('cnn4: ', cnn4.shape)
    print('gap1: ', gap1.shape)
    print('flatten 1: ', flt1.shape)
    print('reshape 2: ', rshp2.shape)

The printed shapes are:

reshape 1:  (None, 3600, 1)
cnn1:  (None, 3592, 100)
cnn2:  (None, 3584, 100)
mp1:  (None, 1194, 100)
cnn3:  (None, 1192, 50)
cnn4:  (None, 1190, 50)
gap1:  (None, 396, 50)
flatten 1:  (None, 19800)
reshape 2:  (None, 60, None)

Looking at the flt1 layer, its output shape is (None, 19800), which can be reshaped to (60, 330). Yet for some reason the (60, -1) target of the rshp2 layer does not work as expected, as the printed reshape 2: (None, 60, None) shows. When I instead hard-code the target as (60, 330), everything works fine. Does anyone know why the -1 does not work?
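The arithmetic behind the expected reshape does hold at runtime; a minimal NumPy sketch (not Keras, just the same element count) confirms that a 19800-element sample splits into (60, 330):

```python
import numpy as np

# One sample with the same element count as the Flatten output: 19800.
flat = np.zeros(19800)

# -1 asks reshape to infer the remaining axis: 19800 / 60 = 330.
seq = flat.reshape(60, -1)
print(seq.shape)  # (60, 330)
```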


1 Answer

The -1 is working as designed.

From the Reshape documentation, https://www.tensorflow.org/api_docs/python/tf/keras/layers/Reshape:

"This layer returns a tensor of shape (batch_size,) + target_shape."

So the batch size is left unchanged, and the remaining dimensions are computed from target_shape.

From the documentation, look at the last example:

    # also supports shape inference using `-1` as dimension
    model.add(tf.keras.layers.Reshape((-1, 2, 2)))
    model.output_shape
    # (None, None, 2, 2)

If you pass -1 in the target shape, Keras stores None for that axis. This is useful when you want variable-length data along that axis, but if your data always has the same shape, simply hard-code the dimension so that it shows up when you print shapes later.
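The resolution Keras skips at graph-build time is simple arithmetic. A hypothetical helper (infer_target_shape is my own name, not a Keras API) sketches how a -1 could be resolved once the per-sample element count is known:

```python
# Hypothetical helper (not part of Keras) showing how a -1 in a target
# shape can be resolved once the total element count is known.
def infer_target_shape(total, target):
    known = 1
    for dim in target:
        if dim != -1:
            known *= dim
    # Replace -1 with the inferred size; the known dims must divide total.
    return tuple(total // known if dim == -1 else dim for dim in target)

# Flatten produced 19800 elements per sample, so (60, -1) -> (60, 330).
print(infer_target_shape(19800, (60, -1)))  # (60, 330)
```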

Note: you also do not need to specify input_shape=input_shape for intermediate layers in the functional API; the model infers those shapes for you.
