Tonz

Reputation: 187

Optuna Pytorch: returned value from the objective function cannot be cast to float

def autotune(trial):

    cfg = { 'device' : "cuda" if torch.cuda.is_available() else "cpu",
        #   'train_batch_size' : 64,
        #   'test_batch_size' : 1000,
        #   'n_epochs' : 1,
        #   'seed' : 0,
        #   'log_interval' : 100,
        #   'save_model' : False,
        #   'dropout_rate' : trial.suggest_uniform('dropout_rate', 0, 1.0),
            'lr' : trial.suggest_loguniform('lr', 1e-3, 1e-2),
            'momentum' : trial.suggest_uniform('momentum', 0.4, 0.99),
            'optimizer': trial.suggest_categorical('optimizer', [torch.optim.Adam, torch.optim.SGD, torch.optim.RMSprop, torch.optim.$
            'activation': F.tanh}

    optimizer = cfg['optimizer'](model.parameters(), lr=cfg['lr'])
    # optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

As you can see above, I am trying to run Optuna trials to search for the optimal hyperparameters for my CNN model.

    # Train the model
    # use a small number of epochs for a large dataset
    # an epoch is one run through all the training data
    # losses = []  # use this array for plotting losses
    for _ in range(epochs):
        # iterate over the data loader
        for i, (data, labels) in enumerate(trainloader):
            # forward pass to get a prediction
            # data is the training data (X_train)
            if name.lower() == "rnn":
                model.hidden = (torch.zeros(1, 1, model.hidden_sz),
                                torch.zeros(1, 1, model.hidden_sz))

            y_pred = model.forward(data)

            # compute loss/error by comparing predicted output vs actual labels
            loss = criterion(y_pred, labels)
            # losses.append(loss)

            if i % 10 == 0:  # print the loss every 10 batches
                print(f'epoch {i} and loss is: {loss}')

            # backpropagation
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

study = optuna.create_study(sampler=optuna.samplers.TPESampler(), direction='minimize', pruner=optuna.pruners.SuccessiveHalvingPruner())
study.optimize(autotune, n_trials=1)

But when I run the above code to tune and find my optimal parameters, the following error occurred. It seems the trial has failed even though I still get epoch losses and values. Please advise, thanks!

[W 2020-11-11 13:59:48,000] Trial 0 failed, because the returned value from the objective function cannot be cast to float. Returned value is: None
Traceback (most recent call last):
  File "autotune2", line 481, in <module>
    n_instances, n_features, scores = run_analysis()
  File "autotune2", line 350, in run_analysis
    print(study.best_params)
  File "/home/shar/anaconda3/lib/python3.7/site-packages/optuna/study.py", line 67, in best_params
    return self.best_trial.params
  File "/home/shar/anaconda3/lib/python3.7/site-packages/optuna/study.py", line 92, in best_trial
    return copy.deepcopy(self._storage.get_best_trial(self._study_id))
  File "/home/shar/anaconda3/lib/python3.7/site-packages/optuna/storages/_in_memory.py", line 287, in get_best_trial
    raise ValueError("No trials are completed yet.")
ValueError: No trials are completed yet.

Upvotes: 4

Views: 3845

Answers (1)

Iñigo González

Reputation: 3945

This exception is raised because the objective function of your study must return a float.

In your case, the problem is in this line:

study.optimize(autotune, n_trials=1)

The autotune function you defined above never returns a value, so it cannot be used for optimization.
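To see the contract Optuna expects, here is a minimal, self-contained sketch (independent of your model): the objective receives a trial, suggests parameters, and returns a float:

import optuna

def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    return (x - 2) ** 2  # returning a float marks the trial as COMPLETE

study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=10)
print(study.best_params)  # works, because at least one trial completed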

How to fix?

For hyperparameter search, the autotune function must return some metric obtained after training, such as the loss or cross-entropy.

A quick fix to your code could look something like this:

def autotune(trial):
  cfg = { 'device' : "cuda" if torch.cuda.is_available() else "cpu"
          ...etc...
        }

  best_loss = float('inf')

  # Train the model
  for _ in range(epochs):
     for i, (data, labels) in enumerate(trainloader):
        ... (train the model) ...
        # compute loss/error by comparing predicted output vs actual labels
        loss = criterion(y_pred, labels)
        best_loss = min(loss.item(), best_loss)  # .item() converts the tensor to a float

  return best_loss
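Since your study is created with a SuccessiveHalvingPruner, it is also worth reporting an intermediate value every epoch so Optuna can stop unpromising trials early. A minimal sketch, assuming epochs, trainloader, criterion and the optimizer are set up as in your question:

def autotune(trial):
    # ... build cfg, model and optimizer as above ...
    best_loss = float('inf')
    for epoch in range(epochs):
        epoch_loss = 0.0
        for data, labels in trainloader:
            optimizer.zero_grad()
            loss = criterion(model(data), labels)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        epoch_loss /= len(trainloader)  # mean loss over this epoch
        best_loss = min(epoch_loss, best_loss)

        # report the intermediate value; the pruner decides whether to stop early
        trial.report(epoch_loss, epoch)
        if trial.should_prune():
            raise optuna.TrialPruned()

    return best_loss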

There is a good PyTorch example in the Optuna repo that uses a PyTorch callback to retrieve the accuracy (it can easily be changed to use the RMSE if needed). It also runs more than one trial and uses the median of previous trials for pruning.
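If you prefer that median-based pruning without the callback, the pruner can be passed directly when creating the study:

study = optuna.create_study(direction='minimize', pruner=optuna.pruners.MedianPruner())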

Upvotes: 4
