Classification using categorical input data and image input data

Published 2024-09-28 20:53:03


I have a small dataset of about 300 rows. Each row has: Column A: an image, Column B: categorical text input, Column C: categorical text input, Column D: categorical text output.

I can train a sequential Keras model on the image input data (Column A) alone to predict the output (Column D), but accuracy is very poor (about 40%). How can I combine the image data with the categorical input data to get better accuracy?

Below is the code I am using. On model.fit I get the error: ValueError: could not convert string to float: 'item1'

There are no numbers in my data; everything is categorical text. I think I need to change something in the model regarding "y" so that it knows the prediction is categorical rather than numeric, but I'm not sure what to change.
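That suspicion points in the right direction: the error comes from passing raw string labels ("item1", …) as y to a numeric output head. A minimal sketch of binarizing the target with scikit-learn's LabelBinarizer (the label values here are made up, not from the real data):

```python
# Hedged sketch: turn string class labels into one-hot numeric rows
# so Keras never sees raw strings in y. Label values are hypothetical.
import numpy as np
from sklearn.preprocessing import LabelBinarizer

y_raw = np.array(["item1", "item2", "item3", "item1"])
lb = LabelBinarizer()
y = lb.fit_transform(y_raw)   # one-hot matrix, one column per class
print(lb.classes_)            # ['item1' 'item2' 'item3']
print(y.shape)                # (4, 3)
```

The model's output layer then needs one unit per class with a softmax activation, trained with categorical_crossentropy, so that predictions and the one-hot y have matching shapes.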

# Imports the snippet relies on (Colab / Keras environment)
import numpy as np
import pandas as pd
from tqdm import tqdm
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from keras.preprocessing import image
from keras.models import Sequential, Model
from keras.layers import (Input, Dense, Conv2D, Activation, BatchNormalization,
                          MaxPooling2D, Flatten, Dropout, concatenate)
from keras.optimizers import Adam
from google.colab import drive

drive.mount('/content/gdrive/')
# read_csv already returns a DataFrame, so no pd.DataFrame() wrapper is needed
train = pd.read_csv(r'gdrive/My Drive/Colab Notebooks/Fast AI/testfilled.csv')
df = train[['Column A', 'Column B', 'Column C', 'Column D']]

def process_categorical_attributes(df, train, test):
  # one-hot encode Column B, fitting on the full dataset so train/test share categories
  zipBinarizer = LabelBinarizer().fit(df["Column B"])
  trainCategorical = zipBinarizer.transform(train["Column B"])
  testCategorical = zipBinarizer.transform(test["Column B"])

  # same for Column C (bug fix: use zipBinarizer2 here, not zipBinarizer)
  zipBinarizer2 = LabelBinarizer().fit(df["Column C"])
  trainCategorical2 = zipBinarizer2.transform(train["Column C"])
  testCategorical2 = zipBinarizer2.transform(test["Column C"])

  # concatenate the two one-hot blocks into a single feature matrix
  trainX = np.hstack([trainCategorical, trainCategorical2])
  testX = np.hstack([testCategorical, testCategorical2])
  return (trainX, testX)

def load_piece_images(df):
  # NB: this reads the global `train` (which still has the FileName column),
  # so the df argument is effectively unused here
  train_image = []
  for i in tqdm(range(train.shape[0])):
    img = image.load_img('gdrive/My Drive/Colab Notebooks/OutputDir/' + train['FileName'][i] + '.bmp',
                         target_size=(400, 400, 3))
    img = image.img_to_array(img)
    img = img / 255   # scale pixel values to [0, 1]
    train_image.append(img)
  return np.array(train_image)

def create_mlp(dim, regress=False):  
  model = Sequential()
  model.add(Dense(8, input_dim=dim, activation="relu"))
  model.add(Dense(4, activation="relu"))
  if regress:
    model.add(Dense(1, activation="linear"))
  return model

def create_cnn(width, height, depth, filters=(16, 32, 64), regress=False):
    inputShape = (height, width, depth)
    chanDim = -1
    inputs = Input(shape=inputShape)
    for (i, f) in enumerate(filters):
        if i == 0:
            x = inputs
        x = Conv2D(f, (3, 3), padding="same")(x)
        x = Activation("relu")(x)
        x = BatchNormalization(axis=chanDim)(x)
        x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Flatten()(x)
    x = Dense(16)(x)
    x = Activation("relu")(x)
    x = BatchNormalization(axis=chanDim)(x)
    x = Dropout(0.5)(x)
    x = Dense(4)(x)
    x = Activation("relu")(x)
    if regress:
        x = Dense(1, activation="linear")(x)
    model = Model(inputs, x)
    return model

images = load_piece_images(df)
split = train_test_split(df, images, test_size=0.25, random_state=42)
(trainAttrX, testAttrX, trainImagesX, testImagesX) = split

trainY = trainAttrX["Column D"]
testY = testAttrX["Column D"]
(trainAttrX, testAttrX) = process_categorical_attributes(df, trainAttrX, testAttrX)

mlp = create_mlp(trainAttrX.shape[1], regress=False)
cnn = create_cnn(400, 400, 3, regress=False)
combinedInput = concatenate([mlp.output, cnn.output])
x = Dense(4, activation="relu")(combinedInput)
x = Dense(1, activation="linear")(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(inputs=[mlp.input, cnn.input], outputs=x)

opt = Adam(lr=1e-3, decay=1e-3 / 200)
model.compile(loss="mean_absolute_percentage_error", optimizer=opt)
model.fit(
    [trainAttrX, trainImagesX], trainY,
    validation_data=([testAttrX, testImagesX], testY),
    epochs=20, batch_size=2)
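For reference, a minimal sketch of what a classification head could look like in place of the stacked linear/sigmoid Dense(1) layers above: one softmax unit per class, compiled with categorical_crossentropy. The input shapes and num_classes below are placeholders standing in for mlp.output, cnn.output, and the number of distinct Column D labels, not values from the question:

```python
# Hedged sketch with stand-in inputs: a softmax head sized to the
# number of classes, matching a one-hot-encoded y.
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

num_classes = 5                      # placeholder: distinct labels in Column D
mlp_out = Input(shape=(4,))          # stand-in for mlp.output
cnn_out = Input(shape=(4,))          # stand-in for cnn.output
x = Dense(4, activation="relu")(concatenate([mlp_out, cnn_out]))
out = Dense(num_classes, activation="softmax")(x)   # one unit per class
model = Model(inputs=[mlp_out, cnn_out], outputs=out)
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
```

With this head, y must be the one-hot matrix from LabelBinarizer (or to_categorical), not raw strings, which is what triggers the ValueError above.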

2 Answers

Another option sometimes used (e.g., in conditional GANs and AlphaFold 2) is to encode the categorical data as extra scalar feature channels in the input image. So, for example, if you one-hot encode category 1 you get a vector like [0, 1, 0, …], and you extend the RGB channels with one channel filled with 0s, another filled with 1s, another filled with 0s, and so on.
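As a toy NumPy sketch of that channel trick (the image and category here are made up; real code would build such an input per sample before training):

```python
import numpy as np

# Toy sketch: broadcast a one-hot category vector into constant feature
# planes and append them to the RGB channels of a single image.
rng = np.random.default_rng(0)
img = rng.random((400, 400, 3))              # RGB image, values in [0, 1]
onehot = np.array([0.0, 1.0, 0.0])           # category 2 of 3 classes
planes = np.ones((400, 400, 3)) * onehot     # one constant plane per class
x = np.concatenate([img, planes], axis=-1)   # network input, 6 channels
print(x.shape)                               # (400, 400, 6)
```

The CNN's input shape then becomes (400, 400, 3 + num_classes) instead of (400, 400, 3).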

The advantage over the other approach (concatenating the categorical feature encoding to a neural embedding) is that the convolutional network itself sees more features, which can be necessary when the RGB channels alone don't carry enough information to distinguish the classes of interest. The downside is that it is neither compute- nor memory-efficient.

This tutorial explains well how to use multiple input sources (text + image data): https://www.pyimagesearch.com/2019/02/04/keras-multiple-inputs-and-mixed-data/

Essentially, this is exactly what you are looking for.
