chronosynclastic

Reputation: 1675

Changing the checkpoint path of lr_find

I want to tune the learning rate for my PyTorch Lightning model. My code runs on a GPU cluster, so I can only write to certain folders that I bind mount. However, trainer.tuner.lr_find tries to write its checkpoint to the folder where my script runs, and since this folder is not writable, it fails with the following error:

OSError: [Errno 30] Read-only file system: '/opt/xrPose/.lr_find_43df1c5c-0aed-4205-ac56-2fe4523ca4a7.ckpt'

Is there any way to change the checkpoint path for lr_find? I checked the checkpointing section of the documentation but couldn't find any information on this.

My code is below:

res = trainer.tuner.lr_find(model, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader, min_lr=1e-5)
logging.info(f"suggested learning rate: {res.suggestion()}")
model.hparams.learning_rate = res.suggestion()

Upvotes: 2

Views: 810

Answers (2)

The checkpoint path is defined in lr_finder.py as:

# Save initial model, that is loaded after learning rate is found
ckpt_path = os.path.join(trainer.default_root_dir, f".lr_find_{uuid.uuid4()}.ckpt")
trainer.save_checkpoint(ckpt_path)

The only way to change the directory the checkpoint is saved to is to change default_root_dir. But be aware that this is also the directory the Lightning logs are saved to.

You can easily change it with trainer = Trainer(default_root_dir='./NAME_OF_THE_DIR').
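For the setup in the question, a minimal sketch could look like this; the writable bind-mounted paths /mnt/output and /mnt/logs are assumptions for illustration, and model, train_dataloader and val_dataloader are the objects from the question. Passing a logger with its own save_dir keeps the Lightning logs out of default_root_dir, which lr_find still uses for its temporary checkpoint.

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# /mnt/output and /mnt/logs stand in for writable bind-mounted folders (assumption).
# lr_find writes its temporary .lr_find_*.ckpt under default_root_dir,
# so pointing default_root_dir at a writable mount avoids the OSError.
trainer = Trainer(
    default_root_dir="/mnt/output",
    # optional: send the logs somewhere other than default_root_dir
    logger=TensorBoardLogger(save_dir="/mnt/logs"),
)

res = trainer.tuner.lr_find(
    model,
    train_dataloaders=train_dataloader,
    val_dataloaders=val_dataloader,
    min_lr=1e-5,
)
model.hparams.learning_rate = res.suggestion()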

Upvotes: 1

u1234x1234

Reputation: 2510

You may need to specify default_root_dir when initializing the Trainer:

trainer = Trainer(default_root_dir='./my_dir')

Description from the Official Documentation:

default_root_dir - Default path for logs and weights when no logger or pytorch_lightning.callbacks.ModelCheckpoint callback passed.

Code example:

import numpy as np
import torch
from pytorch_lightning import LightningModule, Trainer
from torch.utils.data import DataLoader, Dataset


class MyDataset(Dataset):
    def __init__(self) -> None:
        super().__init__()

    def __getitem__(self, index):
        x = np.zeros((10,), np.float32)
        y = np.zeros((1,), np.float32)
        return x, y

    def __len__(self):
        return 100


class MyModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.model = torch.nn.Linear(10, 1)

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = torch.nn.MSELoss()(y_hat, y)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)


model = MyModel()
# default_root_dir is where lr_find writes its temporary .lr_find_*.ckpt file
# (and where logs and weights go when no logger or ModelCheckpoint callback is passed)
trainer = Trainer(default_root_dir='./my_dir')
train_dataloader = DataLoader(MyDataset())
trainer.tuner.lr_find(model, train_dataloader)
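
With this, the temporary .lr_find_*.ckpt file is created under ./my_dir instead of the read-only working directory. As in the question's snippet, the returned object carries the suggested learning rate; a short usage sketch:

res = trainer.tuner.lr_find(model, train_dataloader)
model.hparams.learning_rate = res.suggestion()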

Upvotes: 3
