Error when checking input: expected conv1d_57_input to have 3 dimensions, but got array with shape (152, 64)

Posted 2024-09-30 08:38:00


I am getting this error:

ValueError: Error when checking input: expected conv1d_57_input to have 3 dimensions, but got array with shape (152, 64).

My code:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Dropout, Flatten, Dense

model = Sequential()

model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(152,64)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(4, activation='softmax'))

model.summary()

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

history = model.fit(trainingMatrix, labelTraining, validation_data=(validationMatrix, labelValidation), epochs=3)

Variable notes:

trainingMatrix.shape = (152, 64); rows correspond to samples and columns to features.

Is this a reshaping problem?
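(For context: Conv1D expects 3D input of shape (samples, steps, channels), and trainingMatrix is only 2D. A minimal sketch of the reshape, assuming each row of the (152, 64) matrix is one sample whose 64 features are treated as 64 steps of 1 channel; variable names follow the code above.)

import numpy as np

# (152, 64) -> (152, 64, 1): append a channel axis so Conv1D sees (samples, steps, channels)
trainingMatrix = np.expand_dims(trainingMatrix, axis=-1)
validationMatrix = np.expand_dims(validationMatrix, axis=-1)

# input_shape must then describe a single sample, i.e. (64, 1) instead of (152, 64)
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(64, 1)))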

Edit:

I made the following changes:

trainingMatrix = np.expand_dims(trainingMatrix, axis=3)
validationMatrix = np.expand_dims(validationMatrix, axis=3)

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(64,1)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(4, activation='softmax'))
model.summary()

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

history = model.fit(trainingMatrix, labelTraining, validation_data=(validationMatrix, labelValidation), epochs=3)

and now I get this new error: Error when checking target: expected dense_28 to have shape (1,) but got array with shape (4,)

My model summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_51 (Conv1D)           (None, 62, 64)            256       
_________________________________________________________________
conv1d_52 (Conv1D)           (None, 60, 64)            12352     
_________________________________________________________________
dropout_15 (Dropout)         (None, 60, 64)            0         
_________________________________________________________________
max_pooling1d_15 (MaxPooling (None, 30, 64)            0         
_________________________________________________________________
flatten_16 (Flatten)         (None, 1920)              0         
_________________________________________________________________
dense_27 (Dense)             (None, 100)               192100    
_________________________________________________________________
dense_28 (Dense)             (None, 4)                 404       
=================================================================
Total params: 205,112
Trainable params: 205,112
Non-trainable params: 0

New code and new error:

trainingMatrix = np.expand_dims(trainingMatrix, axis=0)
validationMatrix = np.expand_dims(validationMatrix, axis=0)

model = Sequential()

model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(152,64,1)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(4, activation='softmax'))
model.summary()

ValueError: Input 0 is incompatible with layer conv1d_57: expected ndim=3, found ndim=4
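(This error comes from the model definition rather than the data: input_shape describes a single sample and Keras prepends the batch axis itself, so input_shape=(152, 64, 1) yields a 4D input tensor (None, 152, 64, 1), which Conv1D rejects because it only accepts 3D input. A sketch, under the same 64-steps-of-1-channel assumption as above:)

# input_shape is the shape of one sample; Keras adds the batch axis, so (152, 64, 1) becomes
# a 4D tensor (None, 152, 64, 1) and Conv1D, which needs 3D input, fails at build time.
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(64, 1)))

# Also note: np.expand_dims(trainingMatrix, axis=0) gives shape (1, 152, 64),
# i.e. a single sample of shape (152, 64), not 152 samples with a channel axis.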

The configuration below works, but the accuracy is far too low: I never get above 20%, while an MLP on the same data reached 90%. Does anyone have suggestions for improving it?

trainingMatrix = np.expand_dims(trainingMatrix, axis=3)
validationMatrix = np.expand_dims(validationMatrix, axis=3)

model = Sequential()

model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(64,1)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(4, activation='softmax'))

model.summary()

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

history = model.fit(trainingMatrix, labelTraining, validation_data=(validationMatrix, labelValidation), epochs=1000)

My labelTraining is:

1 0 0 0
1 0 0 0
...
0 1 0 0
0 1 0 0
...
0 0 0 1

Is that okay?
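(That is a one-hot encoding with one column per class, which matches the Dense(4, activation='softmax') output layer when the loss is categorical_crossentropy. A quick sanity check, assuming labelTraining is a NumPy array and using the variable names above:)

print(labelTraining.shape)        # expected (152, 4): one row per sample, one column per class
print(labelTraining.sum(axis=1))  # every row of a one-hot matrix sums to 1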


2 Answers

Thanks everyone for the help. The code below works and reaches 97% accuracy:

trainingMatrix = np.expand_dims(trainingMatrix, axis=-1)    # append the channel axis: (152, 64) -> (152, 64, 1)
validationMatrix = np.expand_dims(validationMatrix, axis=-1)

model = Sequential()

model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(64,1)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Flatten())
model.add(Dense(4, activation='softmax'))

model.summary()

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

history = model.fit(trainingMatrix, labelTraining, batch_size=batchSize, epochs=epochs, verbose=1, validation_data=(validationMatrix, labelValidation))

sparse_categorical_crossentropy vs. categorical_crossentropy

sparse_categorical_crossentropy:

is used when the target is a single column of integer class labels (each label is a class index, so it can be greater than 1).

categorical_crossentropy:

is used when the target is spread across multiple columns as a one-hot encoding (each column holds a binary value).

Since your labels are one-hot encoded, replace sparse_categorical_crossentropy with categorical_crossentropy.
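(A minimal sketch of the two consistent pairings, assuming labelTraining is the one-hot matrix shown in the question:)

import numpy as np

# Option 1: keep the one-hot labels of shape (n, 4)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Option 2: convert to integer class ids of shape (n,) and use the sparse loss
labelTrainingInt = np.argmax(labelTraining, axis=1)
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])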
