"OSError: cannot identify image file" when loading files from a folder

Posted 2024-10-03 19:21:55


I am working on a project to train a model that classifies plankton images. However, when I run the code, it says the image files in my folder cannot be identified. At first I thought the file names were the problem, so I converted all the jpg files to png, but nothing changed. Every single image in the folder opens fine on its own.
I am not using PIL directly, so I am still trying to figure out where the problem is, but nothing I try changes anything. The folder exists and the images are fine.
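
One way to narrow this down is to try opening every file with Pillow directly, since the "cannot identify image file" error is raised by Pillow, which Keras uses under the hood to load images. A minimal sketch (the folder path is just a placeholder):

import os
from PIL import Image

folder = '/path/to/data/train/some_class'  # placeholder, point this at one class folder
for name in os.listdir(folder):
    path = os.path.join(folder, name)
    try:
        with Image.open(path) as img:
            img.verify()  # raises OSError if Pillow cannot identify the file
    except OSError as err:
        print('bad file:', path, err)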
Here is my code:

import sys
import os
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense, Activation
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras import callbacks


train_data_path = '/media/sexybeam/Suzuya/Study/Group-Project/Python/train et test/data/train'
validation_data_path = '/media/sexybeam/Suzuya/Study/Group-Project/Python/train et test/data/validation'

"""
Parameters
"""
img_width, img_height = 128, 128
batch_size = 16
samples_per_epoch = 1000
validation_steps = 300
nb_filters1 = 32
nb_filters2 = 64
conv1_size = 3
conv2_size = 2
pool_size = 2
classes_num = 3  # change this to the number of plankton classes (folders) you have
lr = 0.0004
epochs = 20

model = Sequential()
model.add(Convolution2D(nb_filters1, conv1_size, conv1_size, border_mode ="same", input_shape=(img_width, img_height, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(pool_size, pool_size)))

model.add(Convolution2D(nb_filters2, conv2_size, conv2_size, border_mode ="same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(pool_size, pool_size), dim_ordering='th'))

model.add(Flatten())
model.add(Dense(256))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(classes_num, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=lr),
              metrics=['accuracy'])
print(model.summary())

train_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_path,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_path,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

"""
Tensorboard log
"""
log_dir = '/media/sexybeam/Suzuya/Study/Group-Project/Python/train et test/data/validation'
tb_cb = callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
cbks = [tb_cb]

model.fit_generator(
    train_generator,
    steps_per_epoch=samples_per_epoch,
    epochs=epochs,
    validation_data=validation_generator,
    callbacks=cbks,
    validation_steps=validation_steps)

target_dir = '/media/sexybeam/Suzuya/Study/Group-Project/Python/train et test/models'
if not os.path.exists(target_dir):
  os.mkdir(target_dir)
model.save('/media/sexybeam/Suzuya/Study/Group-Project/Python/train et test/models/model.h5')
model.save_weights('/media/sexybeam/Suzuya/Study/Group-Project/Python/train et test/models/weights.h5')

My error traceback reports the "OSError: cannot identify image file" from the title.

1 answer
User
#1 · Posted 2024-10-03 19:21:55

There is nothing wrong with your code or your images. I had the same problem; it is caused by Keras's ImageDataGenerator. With the tensorflow CPU build it works, but with tensorflow-gpu it does not. If you want to save time, forget about ImageDataGenerator and read the images yourself; I don't remember exactly what the root cause is.

scikit-image is a good package for working with images. You can read the files like this:

import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage import img_as_float

# placeholder array for the images;
# sample_counts, height, width and channels must match your data
train_images = np.zeros((sample_counts, height, width, channels))

for index, file in enumerate(list_of_file_names):
    img = imread(file)
    # in case you want to resize the images, use the line below
    img = resize(img, (height, width, channels), mode='reflect', anti_aliasing=True)
    train_images[index] = img_as_float(img)
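
If you go that route, you also skip fit_generator and pass the arrays to model.fit directly. A minimal sketch, reusing the parameters from your script and assuming train_labels_int and the validation arrays are integer labels and images you prepare the same way (they are not defined above):

from keras.utils import to_categorical

# integer class index (0 .. classes_num-1) for each file, in the same
# order as list_of_file_names; building this list is up to you
train_labels = to_categorical(train_labels_int, num_classes=classes_num)

model.fit(train_images, train_labels,
          batch_size=batch_size,
          epochs=epochs,
          validation_data=(validation_images, validation_labels),
          callbacks=cbks)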
