<p>Machine learning newbie here.
I am currently working on a diagnostic machine learning framework using a 3D CNN on fMRI data. My dataset currently consists of 636 images, and I am trying to distinguish controls from affected subjects (binary classification). However, when I train my model, my accuracy stays at 48.13% after every single epoch, no matter what I do. What's more, over time the accuracy has dropped from 56% to 48.13%.
What I have tried so far:</p>
<ul>
<li>Changing my loss function (Poisson, categorical cross-entropy, binary cross-entropy, sparse categorical cross-entropy, mean squared error, mean absolute error, hinge, squared hinge)</li>
<li>Changing my optimizer (I tried Adam and SGD)</li>
<li>Changing the number of layers</li>
<li>Using weight regularization</li>
<li>Switching from ReLU to leaky ReLU (I thought it might help if this is a case of overfitting)</li>
</ul>
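<p>For context, the ReLU → leaky ReLU swap in the list above only changes how negative pre-activations are handled. A minimal NumPy sketch of the difference (the <code>alpha=0.3</code> slope is my assumption, matching what I understand to be Keras's default for <code>LeakyReLU</code>):</p>

```python
import numpy as np

def relu(x):
    # Standard ReLU: negative inputs are zeroed out entirely.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.3):
    # Leaky ReLU: negative inputs keep a small slope alpha,
    # so units with negative pre-activations still pass gradient.
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(relu(x))        # negatives become 0
print(leaky_relu(x))  # negatives are scaled by alpha instead
```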
<p>So far, none of this has worked.</p>
<p>Any suggestions? Here is my code:</p>
<pre><code>#importing important packages
import tensorflow as tf
import os
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten, Conv3D, MaxPooling3D, Dropout, BatchNormalization, LeakyReLU
import numpy as np
from keras.regularizers import l2
from sklearn.utils import compute_class_weight
from keras.optimizers import SGD
BATCH_SIZE = 64
input_shape=(64, 64, 40, 20)
# Create the model
model = Sequential()
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(BatchNormalization(center=True, scale=True))
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Conv3D(64, kernel_size=(3,3,3), activation='relu', input_shape=input_shape, kernel_regularizer=l2(0.005), bias_regularizer=l2(0.005), data_format = 'channels_first', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(BatchNormalization(center=True, scale=True))
model.add(Flatten())
model.add(BatchNormalization(center=True, scale=True))
model.add(Dense(128, activation='relu', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)))
model.add(Dropout(0.5))
model.add(Dense(128, activation='sigmoid', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)))
model.add(Dense(1, activation='softmax', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)))
# Compile the model
model.compile(optimizer = keras.optimizers.sgd(lr=0.000001), loss='poisson', metrics=['accuracy', tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
# Model Testing
history = model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=50, verbose=1, shuffle=True)
</code></pre>
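<p>Side note: I import <code>compute_class_weight</code> but never actually use it. If my 636 images are imbalanced between controls and affected, maybe that matters? A minimal sketch of the "balanced" weighting I could pass to <code>model.fit(..., class_weight=weights)</code> — the 400/236 split below is made up for illustration, not my real label counts:</p>

```python
from collections import Counter

def balanced_class_weights(labels):
    # "Balanced" heuristic: n_samples / (n_classes * count_of_class),
    # i.e. the rarer a class is, the larger its weight.
    counts = Counter(labels)
    n_samples = len(labels)
    n_classes = len(counts)
    return {cls: n_samples / (n_classes * cnt) for cls, cnt in counts.items()}

# Hypothetical label distribution (my real y_train counts are unknown).
y = [0] * 400 + [1] * 236
weights = balanced_class_weights(y)
print(weights)  # the minority class gets a weight above 1
```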