How do I fix this error when using LearningRateScheduler with Keras and the SGD optimizer?

Posted 2024-09-29 19:21:11


I want to decrease the learning rate after every epoch. I am using Keras, and I get the following error when I run my code:


Traceback (most recent call last):

  File "<ipython-input-1-2983b4be581f>", line 1, in <module>
    runfile('C:/Users/Gehan Mohamed/cnn_learningratescheduler.py', wdir='C:/Users/Gehan Mohamed')

  File "C:\Users\Gehan Mohamed\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)

ValueError: Attempt to convert a value (<keras.callbacks.callbacks.LearningRateScheduler object at 0x000001E7C7B8E780>) with an unsupported type (<class 'keras.callbacks.callbacks.LearningRateScheduler'>) to a Tensor.

How can I fix this error?

def step_decay(epochs):
    if epochs <50:
        lrate=0.1
        return lrate
    if epochs >50:
        lrate=0.01
        return lrate            

lrate = LearningRateScheduler(step_decay)
sgd = SGD(lr=lrate, decay=0, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
callbacks_list = [lrate,callback]
filesPath=getFilesPathWithoutSeizure(i, indexPat)
history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75), 
                                validation_data=generate_arrays_for_training(indexPat, filesPath, start=75),
                                steps_per_epoch=int((len(filesPath)-int(len(filesPath)/100*25))), 
                                validation_steps=int((len(filesPath)-int(len(filesPath)/100*75))),
                                verbose=2,
                                epochs=300, max_queue_size=2, shuffle=True, callbacks=callbacks_list)

2 Answers

In this part of the code:

lrate = LearningRateScheduler(step_decay)
sgd = SGD(lr=lrate, decay=0, momentum=0.9, nesterov=True)

you are setting SGD's learning rate to a callback object, which is incorrect. You should give SGD an initial learning rate as a number:

sgd = SGD(lr=0.01, decay=0, momentum=0.9, nesterov=True)

and pass the list of callbacks to model.fit. The confusion likely comes from reusing the name lrate for both the callback and the learning rate.
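To make the fix concrete, here is a minimal, self-contained sketch of what the corrected wiring looks like. The SGD and LearningRateScheduler classes below are simplified stand-ins that mimic the relevant Keras behavior (so the snippet runs without TensorFlow installed); they are not the real Keras classes. It also closes a gap in the question's step_decay, which returned None when epochs == 50 because it only checked < 50 and > 50:

```python
class SGD:
    """Stand-in for keras.optimizers.SGD: lr must be a number, not a callback."""
    def __init__(self, lr):
        self.lr = lr


class LearningRateScheduler:
    """Stand-in mimicking keras.callbacks.LearningRateScheduler: it calls the
    schedule function at the start of every epoch and updates the optimizer."""
    def __init__(self, schedule):
        self.schedule = schedule

    def on_epoch_begin(self, epoch, optimizer):
        optimizer.lr = self.schedule(epoch)


def step_decay(epoch):
    # Covers every epoch, including epoch == 50 (the original returned
    # None there because it used `< 50` and `> 50`).
    return 0.1 if epoch < 50 else 0.01


sgd = SGD(lr=0.1)                      # plain float, NOT the callback object
scheduler = LearningRateScheduler(step_decay)

seen = []
for epoch in range(100):               # model.fit does this loop internally
    scheduler.on_epoch_begin(epoch, sgd)
    seen.append(sgd.lr)

print(seen[0], seen[49], seen[50])
```

The key point is the separation of concerns: the optimizer holds a numeric learning rate, and the scheduler callback is the thing that updates it each epoch.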

You can decrease the learning rate by a custom amount after each epoch, as shown below:

import tensorflow as tf

def scheduler(epoch, lr):
    if epoch < 1:
        return lr
    else:
        return lr * tf.math.exp(-0.1)

The function above decreases the learning rate; it now needs to be called after every epoch. Below, the function is wrapped with LearningRateScheduler (see the documentation on the TensorFlow website for more details):

callback = tf.keras.callbacks.LearningRateScheduler(scheduler)

Now pass it to the fit method:

history = model.fit(trainGen,
                    validation_data=valGen,
                    validation_steps=val_split // batch_size,
                    steps_per_epoch=train_split // batch_size,
                    epochs=200,
                    callbacks=[callback])

As shown above, you simply pass the initialized scheduler to the fit method and run it. You will notice that after every epoch the learning rate keeps decreasing according to the rule you set in the scheduler function.
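To see what values this schedule actually produces, here is a small standalone sketch that replays the same rule with math.exp in place of tf.math.exp (the starting rate of 0.1 is an arbitrary example value, not something from the question):

```python
import math

def scheduler(epoch, lr):
    # Same rule as the answer: keep lr for the first epoch,
    # then multiply by exp(-0.1) after every subsequent epoch.
    if epoch < 1:
        return lr
    return lr * math.exp(-0.1)

lr = 0.1
history = []
for epoch in range(5):
    lr = scheduler(epoch, lr)
    history.append(lr)

# After n decaying epochs the rate is 0.1 * exp(-0.1 * n);
# here epoch 0 is unchanged and epochs 1-4 each decay once.
print(history[-1])
```

This confirms the decay compounds multiplicatively: each epoch shrinks the rate by a factor of about 0.905.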
