KerasTuner: tuning the number of layers creates a different number of layers than reported

Published 2024-06-24 12:02:09


I am using kerastuner.tuners.RandomSearch to tune hyperparameters, including the number of layers in my model, but the reported number of layers differs from the number of layers actually created. For example:

Search: Running Trial #1

Hyperparameter    |Value             |Best Value So Far 
num_layers_conv   |7                 |?                 
conv0_filters     |32                |?                 
conv0_kernel_size |5                 |?                 
pool0_pool_size   |6                 |?                 
conv1_filters     |64                |?                 
conv1_kernel_size |6                 |?                 
pool1_pool_size   |6                 |?                 
num_layers_dense  |8                 |?                 
dense0_units      |32                |?                 
dense1_units      |256               |?                 

In the trial above, the number of conv layers is reported as 7, but only 3 were created, and the number of dense layers is reported as 8, but only two were created.

Here is the code for my build_model function:

from tensorflow.keras import layers, models  # imports implied by the snippet


def build_model(hp):
    model = models.Sequential()

    num_conv = hp.Int('num_layers_conv', 2, 8)

    model.add(layers.Conv2D(filters=hp.Choice('conv0_filters', [32, 64, 128]),
                            kernel_size=hp.Int('conv0_kernel_size', min_value=3, max_value=9),
                            activation='relu',
                            input_shape=(200, 200, 1),
                            padding='same'))
    model.add(layers.MaxPooling2D(pool_size=hp.Int('pool0_pool_size', min_value=2, max_value=7),
                                  padding='same'))
    for i in range(1, num_conv):
        model.add(layers.Conv2D(filters=hp.Choice(f'conv{i}_filters', [32, 64, 128]),
                                kernel_size=hp.Int(f'conv{i}_kernel_size', min_value=3, max_value=9),
                                activation='relu',
                                padding='same'))
        model.add(layers.MaxPooling2D(pool_size=hp.Int(f'pool{i}_pool_size', min_value=2, max_value=7),
                                      padding='same'))
    
    model.add(layers.Flatten())

    num_dense = hp.Int('num_layers_dense', 2, 8)

    for i in range(num_dense):
        model.add(layers.Dense(units=hp.Choice(f'dense{i}_units', [32, 64, 128, 256, 512]),
                               activation='relu'))
        
    model.add(layers.Dense(29, activation='sigmoid'))
    
    model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    return model

Am I doing something wrong, or is this a bug in KerasTuner?
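For what it's worth, the symptom is consistent with KerasTuner registering hyperparameters lazily: a hyperparameter only joins the search space the first time build_model requests it, so a trial summary can show num_layers_conv=7 while listing only the conv0/conv1 entries that were discovered during the initial build with default values. The deeper layers are still built in that trial, and their hyperparameters appear in later trials. The following toy stand-in (ToyHyperParameters is hypothetical, not the real KerasTuner API) sketches this registration behavior:

```python
import random

class ToyHyperParameters:
    """Hypothetical stand-in mimicking lazy hyperparameter registration."""
    def __init__(self, known):
        # hyperparameters already in the search space at trial start
        self.space = dict(known)

    def Int(self, name, lo, hi):
        # a new hyperparameter is registered only when first requested;
        # until then it is absent from the reported table
        return self.space.setdefault(name, random.randint(lo, hi))

# Trial summary shows only these (discovered during the default build):
known = {'num_layers_conv': 7, 'conv0_filters': 32, 'conv1_filters': 64}
hp = ToyHyperParameters(known)

built = 0
for i in range(hp.Int('num_layers_conv', 2, 8)):
    hp.Int(f'conv{i}_filters', 32, 128)  # conv2..conv6 registered only now
    built += 1

print(built)             # 7 layers really built despite the short table
print(sorted(hp.space))  # conv2_filters..conv6_filters now in the space
```

If this is what is happening, the model in trial #1 really does have 7 conv blocks; the table simply cannot display values for hyperparameters it has not seen yet.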