How to split the input into different channels in Keras


I have 20 channels of data, each channel holding 5000 values (over 150,000 records in total, stored as .npy files on the hard drive).

I am following the Keras fit_generator tutorial at https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly.html to read the data (each record is read as a (5000, 20) numpy array of type float32).

The network I have in mind has a parallel convolutional sub-network for each channel, with the branches concatenated at the end, so the data needs to be fed to them in parallel. Reading just a single channel from the data and feeding it into a single network works successfully:

def __data_generation(self, list_IDs_temp):
    'Generates data containing batch_size samples' # X : (n_samples, *dim, n_channels)
    # Initialization
    if(self.n_channels == 1):
        X = np.empty((self.batch_size, *self.dim))
    else:
        X = np.empty((self.batch_size, *self.dim, self.n_channels))
    y = np.empty((self.batch_size), dtype=int)

    # Generate data
    for i, ID in enumerate(list_IDs_temp):
        # Store sample
        d = np.load(self.data_path + ID + '.npy')
        d = d[:, self.required_channel]
        d = np.expand_dims(d, 2)
        X[i,] = d

        # Store class
        y[i] = self.labels[ID]

    return X, keras.utils.to_categorical(y, num_classes=self.n_classes)
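
For context, this generator is plugged into training in the usual fit_generator way; the following is a minimal sketch based on the tutorial's DataGenerator pattern (the constructor arguments shown here, including data_path and required_channel, are assumptions inferred from the snippet above rather than code from the post):

params = {'dim': (5000,),          # assumed per-sample dimension
          'batch_size': 32,
          'n_classes': 2,          # placeholder
          'n_channels': 1,
          'required_channel': 0,   # hypothetical argument matching the snippet above
          'data_path': 'data/'}    # hypothetical

training_generator = DataGenerator(partition['train'], labels, **params)
validation_generator = DataGenerator(partition['validation'], labels, **params)

model.fit_generator(generator=training_generator,
                    validation_data=validation_generator,
                    use_multiprocessing=True,
                    workers=4)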

However, when I read the entire record and try to feed it to the network, slicing out the channels with a Lambda layer, I run into an error (shown below).

Reading the entire record:

 X[i,] = np.load(self.data_path + ID + '.npy')

Using the Lambda slicing layer implementation from https://github.com/keras-team/keras/issues/890 and calling:

input = Input(shape=(5000, 20))
slicedInput = crop(2, 0, 1)(input)
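
(For reference, the crop helper from that issue is essentially a small Lambda factory along these lines; this is a sketch of the idea rather than a verbatim copy of the snippet in the thread:)

from keras.layers import Lambda

def crop(dimension, start, end):
    # Returns a Lambda layer that slices a tensor on the given dimension
    # from start to end, e.g. crop(2, 5, 10)(x) corresponds to x[:, :, 5:10].
    def func(x):
        if dimension == 0:
            return x[start:end]
        if dimension == 1:
            return x[:, start:end]
        if dimension == 2:
            return x[:, :, start:end]
        if dimension == 3:
            return x[:, :, :, start:end]
    return Lambda(func)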

I am able to compile the model, and it shows the expected layer sizes.

But when data is fed to this network, I get:

ValueError: could not broadcast input array from shape (5000,20) into shape (5000,1)

Any help would be greatly appreciated.


1 Answer

As mentioned in the GitHub thread you are referencing, a Lambda layer can return only one output, so the suggested crop helper returns only a single "tensor on a given dimension from start to end".

I believe what you are trying to achieve can be done as follows:

import keras
from keras.layers import Dense, Concatenate, Input, Lambda
from keras.models import Model

num_channels = 20
input = Input(shape=(5000, num_channels))

branch_outputs = []
for i in range(num_channels):
    # Slicing the ith channel (the last axis); i is bound as a default
    # argument so each Lambda keeps its own channel index:
    out = Lambda(lambda x, k=i: x[:, :, k])(input)

    # Setting up your per-channel layers (replace with actual sub-models):
    out = Dense(16)(out)
    branch_outputs.append(out)

# Concatenating together the per-channel results:
out = Concatenate()(branch_outputs)

# Adding some further layers (replace or remove with your architecture):
out = Dense(10)(out)

# Building the model:
model = Model(inputs=input, outputs=out)
model.compile(optimizer=keras.optimizers.Adam(lr=0.001),
              loss='categorical_crossentropy', metrics=['accuracy'])

# --------------
# Generating dummy data:
import numpy as np
data = np.random.random((64, 5000, num_channels))
targets = np.random.randint(2, size=(64, 10))

# Training the model:
model.fit(data, targets, epochs=2, batch_size=32)
# Epoch 1/2
# 32/64 [==============>...............] - ETA: 1s - loss: 37.1219 - acc: 0.1562
# 64/64 [==============================] - 2s 27ms/step - loss: 38.4801 - acc: 0.1875
# Epoch 2/2
# 32/64 [==============>...............] - ETA: 0s - loss: 38.9541 - acc: 0.0938
# 64/64 [==============================] - 0s 4ms/step - loss: 36.0179 - acc: 0.1875
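
Since the question is about per-channel convolutional branches rather than Dense layers, the branch body in the loop above could be swapped for a small Conv1D stack; here is a minimal sketch with placeholder hyperparameters (the filter count, kernel size, and pooling choice are assumptions, not taken from the original post):

from keras.layers import Conv1D, GlobalMaxPooling1D

branch_outputs = []
for i in range(num_channels):
    # Keep a length-1 channel axis so Conv1D sees per-branch input shape (5000, 1):
    out = Lambda(lambda x, k=i: x[:, :, k:k+1])(input)
    out = Conv1D(filters=8, kernel_size=16, activation='relu')(out)  # placeholder hyperparameters
    out = GlobalMaxPooling1D()(out)
    branch_outputs.append(out)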
