Kevin

Reputation: 3348

Log metrics with configuration in PyTorch Lightning using W&B

I am using PyTorch Lightning together with W&B and trying to associate metrics with a finite set of configurations. In my LightningModule class I have defined test_step as:

def test_step(self, batch, batch_idx):
    x, y_true, config_file = batch
    y_pred = self.forward(x)
    accuracy = self.accuracy(y_pred, y_true)
    self.log("test/accuracy", accuracy)

Assuming (for simplicity) a batch size of 1, this logs the accuracy for one sample, and it is displayed as a chart in the W&B dashboard.

I would like to associate this accuracy with some configuration of the experimental environment. This configuration might include the BDP factor, bandwidth delay, queue_size, location, etc. I don't want to plot the configurations; I just want to be able to filter or group the accuracy by some configuration value.

The only solution I can come up with is to encode these configurations in the metric name as a query string:

def test_step(self, batch, batch_idx):
    x, y_true, config_file = batch
    # read values from the config file
    # ...

    y_pred = self.forward(x)
    accuracy = self.accuracy(y_pred, y_true)
    self.log("test/BDP=2&delay=10ms&queue_size=10&topology=single/accuracy", accuracy)

Is there a better solution for this that integrates my desired functionality of being able to group and filter by values like BDP?
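For reference, the kind of filtering/grouping I'm after can be sketched in plain Python (the config values below are made up, and no W&B calls are involved):

```python
from collections import defaultdict

# Hypothetical per-sample results: each accuracy is paired with the
# config it was measured under, instead of being baked into the metric name.
results = [
    {"BDP": 2, "delay_ms": 10, "queue_size": 10, "accuracy": 0.90},
    {"BDP": 2, "delay_ms": 10, "queue_size": 10, "accuracy": 0.90},
    {"BDP": 4, "delay_ms": 20, "queue_size": 10, "accuracy": 0.75},
]

# Group accuracies by BDP, the way a dashboard "group by" would.
by_bdp = defaultdict(list)
for r in results:
    by_bdp[r["BDP"]].append(r["accuracy"])

mean_by_bdp = {k: sum(v) / len(v) for k, v in by_bdp.items()}
print(mean_by_bdp)
```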

Upvotes: 2

Views: 653

Answers (1)

morganmcg

Reputation: 553

I work at W&B. You could log your config variables using wandb.config, like so:

wandb.config['my_variable'] = 123

And then you'll be able to filter your charts by whatever config you've logged. Or am I missing something?

Possibly the save_hyperparameters call might even grab these config values automatically (from the WandbLogger docs):

class LitModule(LightningModule):
    def __init__(self, *args, **kwargs):
        super().__init__()
        self.save_hyperparameters()

Upvotes: 1
