Eda

Reputation: 705

Getting an error when using LearningRateScheduler with Keras and the SGD optimizer

I want to decrease the learning rate after each epoch. I am using Keras, and I get this error when I run my code.


Traceback (most recent call last):

  File "<ipython-input-1-2983b4be581f>", line 1, in <module>
    runfile('C:/Users/Gehan Mohamed/cnn_learningratescheduler.py', wdir='C:/Users/Gehan Mohamed')

  File "C:\Users\Gehan Mohamed\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)

ValueError: Attempt to convert a value (<keras.callbacks.callbacks.LearningRateScheduler object at 0x000001E7C7B8E780>) with an unsupported type (<class 'keras.callbacks.callbacks.LearningRateScheduler'>) to a Tensor.

How can I solve this error?

def step_decay(epochs):
    if epochs <50:
        lrate=0.1
        return lrate
    if epochs >50:
        lrate=0.01
        return lrate

lrate = LearningRateScheduler(step_decay)
sgd = SGD(lr=lrate, decay=0, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
callbacks_list = [lrate,callback]
filesPath=getFilesPathWithoutSeizure(i, indexPat)
history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75), 
                                validation_data=generate_arrays_for_training(indexPat, filesPath, start=75),
                                steps_per_epoch=int((len(filesPath)-int(len(filesPath)/100*25))), 
                                validation_steps=int((len(filesPath)-int(len(filesPath)/100*75))),
                                verbose=2,
                                epochs=300, max_queue_size=2, shuffle=True, callbacks=callbacks_list)

Upvotes: 1

Views: 1499

Answers (2)

Dr. Snoopy

Reputation: 56357

In this part of the code:

lrate = LearningRateScheduler(step_decay)
sgd = SGD(lr=lrate, decay=0, momentum=0.9, nesterov=True)

You are setting the learning rate of SGD to the callback object, which is incorrect. You should give SGD an initial learning rate instead:

sgd = SGD(lr=0.01, decay=0, momentum=0.9, nesterov=True)

Then pass the callback list to model.fit. The mix-up is probably an artifact of an earlier variable that you also called lrate.
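For reference, a minimal sketch of the corrected wiring, assuming the standalone keras package from the traceback; model, x_train, and y_train stand in for your own model and data, and step_decay is adjusted so it also returns a rate at epoch 50 (the original returned None there):

from keras.callbacks import LearningRateScheduler
from keras.optimizers import SGD

def step_decay(epoch):
    # Epoch index -> learning rate; covers every epoch, including 50.
    return 0.1 if epoch < 50 else 0.01

lr_schedule = LearningRateScheduler(step_decay)            # a callback object, not a number
sgd = SGD(lr=0.01, decay=0, momentum=0.9, nesterov=True)   # initial learning rate is a plain float

model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.fit(x_train, y_train, epochs=300, callbacks=[lr_schedule])  # the callback goes here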

Upvotes: 3

Vasanth Nag K V

Reputation: 4988

You can reduce the learning rate by a custom factor after each epoch, as shown below.

import tensorflow as tf

def scheduler(epoch, lr):
  # Keep the initial rate for the first epoch, then decay it exponentially.
  if epoch < 1:
    return lr
  else:
    return lr * tf.math.exp(-0.1)

Above is the function that computes the reduced learning rate; it needs to run after each epoch. Below, it is wrapped in a LearningRateScheduler callback (see the TensorFlow documentation for more details):

callback = tf.keras.callbacks.LearningRateScheduler(scheduler)

Now, pass it to the fit method.

history = model.fit(trainGen,
                    validation_data=valGen,
                    validation_steps=val_split // batch_size,
                    steps_per_epoch=train_split // batch_size,
                    epochs=200,
                    callbacks=[callback])

As shown above, you just pass the initialized scheduler to the fit method and run it. After each epoch the learning rate decreases according to what you set in the scheduler function.
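If you want to confirm that the schedule is actually being applied, LearningRateScheduler also accepts a verbose flag that logs the rate chosen at each epoch:

callback = tf.keras.callbacks.LearningRateScheduler(scheduler, verbose=1)

Note that every epoch after the first multiplies the rate by exp(-0.1), so after n epochs the rate is lr0 * exp(-0.1 * (n - 1)); over 200 epochs that is a shrink factor of roughly exp(-19.9) ≈ 2e-9. Choose the decay constant with the total number of epochs in mind.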

Upvotes: 1
