WARNING:tensorflow: Model was constructed with shape (20, 37, 42) for input Tensor("input_5:0", shape=(20, 37, 42), dtype=float32), but...

Posted 2024-05-20 01:06:45


WARNING:tensorflow: Model was constructed with shape (20, 37, 42) for input Tensor("input_5:0", shape=(20, 37, 42), dtype=float32), but it was called on an input with incompatible shape (None, 37)

Hi! Deep learning beginner here... I'm having trouble with LSTM layers. Each input is a float array of length 37: 2 floats followed by a one-hot array of length 35 cast to floats. Each output is an array of length 19 containing 0s and 1s. As the title says, I'm struggling to reshape the input data to fit the model, and I'm not even sure which input dimensions would be considered "compatible".

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


import random
inputs, outputs = [], []
for x in range(10000):
    tempi, tempo = [], []
    tempi.append(random.random() - 0.5)
    tempi.append(random.random() - 0.5)
    for x2 in range(35):
        if random.random() > 0.5:
            tempi.append(1.)
        else:
            tempi.append(0.)
    for x2 in range(19):
        if random.random() > 0.5:
            tempo.append(1.)
        else:
            tempo.append(0.)
    inputs.append(tempi)
    outputs.append(tempo)

batch = 20
timesteps = 42
training_units = 0.85

cutting_point_i = int(len(inputs)*training_units)
cutting_point_o = int(len(outputs)*training_units)
x_train, x_test = np.asarray(inputs[:cutting_point_i]), np.asarray(inputs[cutting_point_i:])
y_train, y_test = np.asarray(outputs[:cutting_point_o]), np.asarray(outputs[cutting_point_o:])

input_layer = keras.Input(shape=(37,timesteps),batch_size=batch)
dense = layers.LSTM(150, activation="sigmoid", return_sequences=True)
x = dense(input_layer)
hidden_layer_2 = layers.LSTM(150, activation="sigmoid", return_sequences=True)(x)
output_layer = layers.Dense(10, activation="softmax")(hidden_layer_2)
model = keras.Model(inputs=input_layer, outputs=output_layer, name="my_model")
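One likely cause of the warning (a sketch, not from the original question): the generated data above is a 2-D array of shape (n, 37), while an LSTM expects a 3-D array of shape (n, timesteps, features). Adding an explicit timestep axis, treating each length-37 vector as a single timestep, is one possible way to make the ranks agree (the model's Input shape would then need to be (1, 37)):

```python
import numpy as np

# Placeholder for the real data: n samples, each a length-37 float vector.
inputs = np.random.rand(10000, 37).astype(np.float32)

# Assumption: treat the whole length-37 vector as one timestep,
# so the LSTM sees (n_samples, 1, 37) instead of (n_samples, 37).
x = np.expand_dims(inputs, axis=1)
print(x.shape)  # (10000, 1, 37)
```

Whether one timestep of 37 features (or some other timesteps/features split) is the right reading depends on what the 37 values mean; the answers below discuss both sides of that choice.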

2 Answers

The correct input to the model is (20, 37, 42). Note: here 20 is the batch size you explicitly specified.

Code:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

batch = 20
timesteps = 42
training_units = 0.85

x1 = tf.constant(np.random.randint(50, size =(1000,37, 42)), dtype = tf.float32)
y1 = tf.constant(np.random.randint(10, size =(1000,)), dtype = tf.int32)
 

input_layer = keras.Input(shape=(37,timesteps),batch_size=batch)
dense = layers.LSTM(150, activation="sigmoid", return_sequences=True)
x = dense(input_layer)
hidden_layer_2 = layers.LSTM(150, activation="sigmoid", return_sequences=True)(x)
hidden_layer_3 = layers.Flatten()(hidden_layer_2)
output_layer = layers.Dense(10, activation="softmax")(hidden_layer_3)
model = keras.Model(inputs=input_layer, outputs=output_layer, name="my_model")

model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
tf.keras.utils.plot_model(model, 'my_first_model.png', show_shapes=True)

Model architecture:

[model plot: my_first_model.png]

You can clearly see the input size there.

Code to run:

model.fit(x = x1, y = y1, batch_size = batch, epochs = 10)

Note: whatever batch size you specify in the Input layer, you must pass that same batch size to the model.fit() call.
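As a quick sanity check on the log below (plain arithmetic, nothing TensorFlow-specific): 1000 samples at a batch size of 20 gives 50 full batches, which matches the 50/50 progress bars in the output:

```python
n_samples = 1000
batch = 20

# Number of gradient-update steps per epoch when every batch is full.
steps_per_epoch = n_samples // batch
print(steps_per_epoch)  # 50
```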

Output:

Epoch 1/10
50/50 [==============================] - 4s 89ms/step - loss: 2.3288 - accuracy: 0.0920
Epoch 2/10
50/50 [==============================] - 5s 91ms/step - loss: 2.3154 - accuracy: 0.1050
Epoch 3/10
50/50 [==============================] - 5s 101ms/step - loss: 2.3114 - accuracy: 0.0900
Epoch 4/10
50/50 [==============================] - 5s 101ms/step - loss: 2.3036 - accuracy: 0.1060
Epoch 5/10
50/50 [==============================] - 5s 99ms/step - loss: 2.2998 - accuracy: 0.1000
Epoch 6/10
50/50 [==============================] - 4s 89ms/step - loss: 2.2986 - accuracy: 0.1170
Epoch 7/10
50/50 [==============================] - 4s 84ms/step - loss: 2.2981 - accuracy: 0.1300
Epoch 8/10
50/50 [==============================] - 5s 103ms/step - loss: 2.2950 - accuracy: 0.1290
Epoch 9/10
50/50 [==============================] - 5s 106ms/step - loss: 2.2960 - accuracy: 0.1210
Epoch 10/10
50/50 [==============================] - 5s 97ms/step - loss: 2.2874 - accuracy: 0.1210

There are a few problems here:

  • Your input has no timesteps; LSTM layers need input of shape (n, timesteps, features)
  • In input_shape, the timestep dimension comes first, not last
  • The last LSTM layer returns sequences, so its output cannot be compared against your 0s and 1s

What I did:

  • I added timesteps (7) to the data
  • I reordered the dimensions in input_shape
  • I set return_sequences=False on the last LSTM
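The first two points can be illustrated with a small NumPy sketch (the 7/10 split mirrors the timesteps/features choice in the fixed example below; the flat data here is invented for illustration):

```python
import numpy as np

n_samples, timesteps, features = 1000, 7, 10

# Flat data: 70 values per sample, no time axis yet.
flat = np.random.rand(n_samples, timesteps * features)

# Reshape so axis 1 is time and axis 2 is features,
# matching Input(shape=(timesteps, features)).
x = flat.reshape(n_samples, timesteps, features)
print(x.shape)  # (1000, 7, 10)
```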

A fully fixed example with generated data:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

batch = 20
n_samples = 1000
timesteps = 7
features = 10

x_train = np.random.rand(n_samples, timesteps, features)
y_train = keras.utils.to_categorical(np.random.randint(0, 10, n_samples))

input_layer = keras.Input(shape=(timesteps, features),batch_size=batch)
dense = layers.LSTM(16, activation="sigmoid", return_sequences=True)(input_layer)
hidden_layer_2 = layers.LSTM(16, activation="sigmoid", return_sequences=False)(dense)
output_layer = layers.Dense(10, activation="softmax")(hidden_layer_2)
model = keras.Model(inputs=input_layer, outputs=output_layer, name="my_model")

model.compile(loss='categorical_crossentropy', optimizer='adam')

history = model.fit(x_train, y_train)
Train on 1000 samples
  20/1000 [..............................] - ETA: 2:50 - loss: 2.5145
 200/1000 [=====>........................] - ETA: 14s - loss: 2.3934 
 380/1000 [==========>...................] - ETA: 5s - loss: 2.3647 
 560/1000 [===============>..............] - ETA: 2s - loss: 2.3549
 740/1000 [=====================>........] - ETA: 1s - loss: 2.3395
 900/1000 [==========================>...] - ETA: 0s - loss: 2.3363
1000/1000 [==============================] - 4s 4ms/sample - loss: 2.3353
