Daremitsu

Reputation: 643

TypeError: setup() got an unexpected keyword argument 'stage'

I am trying to train my Q&A model with pytorch_lightning. However, when I run trainer.fit(model, data_module) I get the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-72-b9cdaa88efa7> in <module>()
----> 1 trainer.fit(model,data_module)

4 frames
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in _call_setup_hook(self)
   1488 
   1489         if self.datamodule is not None:
-> 1490             self.datamodule.setup(stage=fn)
   1491         self._call_callback_hooks("setup", stage=fn)
   1492         self._call_lightning_module_hook("setup", stage=fn)

TypeError: setup() got an unexpected keyword argument 'stage'

I have installed and imported pytorch_lightning.

I have also defined data_module = BioQADataModule(train_df, val_df, tokenizer, batch_size = BATCH_SIZE), where BATCH_SIZE = 2 and N_EPOCHS = 6.
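For context, the failing call is wired up roughly like this (other Trainer arguments are omitted; passing N_EPOCHS as max_epochs is my assumption):

trainer = pl.Trainer(max_epochs=N_EPOCHS)  # N_EPOCHS = 6
trainer.fit(model, data_module)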

The model I am using is as follows:

model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)

I have also defined the model class as follows:

    class BioQAModel(pl.LightningModule):
    
      def __init__(self):
        super().__init__()
        self.model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)
    
      def forward(self, input_ids, attention_mask, labels=None):
        output = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            labels=labels
        )
        return output.loss, output.logits
    
      def training_step(self, batch, batch_idx):
        input_ids = batch["input_ids"]
        attention_mask = batch["attention_mask"]
        labels = batch["labels"]
        loss, outputs = self(input_ids, attention_mask, labels)
        self.log("train_loss", loss, prog_bar=True, logger=True)
        return loss
    
      def validation_step(self, batch, batch_idx):
        input_ids = batch["input_ids"]
        attention_mask = batch["attention_mask"]
        labels = batch["labels"]
        loss, outputs = self(input_ids, attention_mask, labels)
        self.log("val_loss", loss, prog_bar=True, logger=True)
        return loss 
    
      def test_step(self, batch, batch_idx):
        input_ids = batch["input_ids"]
        attention_mask = batch["attention_mask"]
        labels = batch["labels"]
        loss, outputs = self(input_ids, attention_mask, labels)
        self.log("test_loss", loss, prog_bar=True, logger=True)
        return loss
    
      def configure_optimizers(self):
        return AdamW(self.parameters(), lr=0.0001)

If any additional information is required, please let me know.

Edit 1: Adding BioQADataModule:

class BioQADataModule(pl.LightningDataModule):

  def __init__(
      self,
      train_df: pd.DataFrame,
      test_df: pd.DataFrame,
      tokenizer: T5Tokenizer,
      batch_size: int = 8,
      source_max_token_len = 396,
      target_max_token_len = 32
    ):
      super().__init__()
      self.batch_size = batch_size
      self.train_df = train_df
      self.test_df = test_df
      self.tokenizer = tokenizer
      self.source_max_token_len = source_max_token_len
      self.target_max_token_len = target_max_token_len

  def setup(self):
    self.train_dataset = BioQADataset(
        self.train_df,
        self.tokenizer,
        self.source_max_token_len,
        self.target_max_token_len
    )

    self.test_dataset = BioQADataset(
        self.test_df,
        self.tokenizer,
        self.source_max_token_len,
        self.target_max_token_len
    )

  def train_dataloader(self):
    return DataLoader(
        self.train_dataset,
        batch_size = self.batch_size,
        shuffle = True,
        num_workers = 4
    )

  def val_dataloader(self):
    return DataLoader(
        self.test_dataset,
        batch_size = 1,
        shuffle = False,
        num_workers = 4
    )

  def test_dataloader(self):
    return DataLoader(
        self.test_dataset,
        batch_size = 1,
        shuffle = False,
        num_workers = 4
    )

Upvotes: 4

Views: 4860

Answers (1)

Aramakus

Reputation: 1920

You need to add an extra stage argument to your setup method. The Trainer calls datamodule.setup(stage=fn) internally (you can see this in the traceback), so your override has to accept that keyword:

def setup(self, stage=None):
    self.train_dataset = BioQADataset(
        self.train_df,
        self.tokenizer,
        self.source_max_token_len,
        self.target_max_token_len
    )

    self.test_dataset = BioQADataset(
        self.test_df,
        self.tokenizer,
        self.source_max_token_len,
        self.target_max_token_len
    )
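If you want to actually use the argument rather than just accept it, you can guard the dataset construction on the stage value. A minimal sketch, assuming a recent Lightning version (the stage strings are "fit", "validate", "test", and "predict"; older releases used only "fit" and "test"):

def setup(self, stage=None):
    # stage is None when setup() is called manually.
    if stage in (None, "fit"):
        self.train_dataset = BioQADataset(
            self.train_df,
            self.tokenizer,
            self.source_max_token_len,
            self.target_max_token_len
        )
    # The same dataset backs validation and testing in this module,
    # so build it for every stage.
    if stage in (None, "fit", "validate", "test"):
        self.test_dataset = BioQADataset(
            self.test_df,
            self.tokenizer,
            self.source_max_token_len,
            self.target_max_token_len
        )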

I've played with PyTorch Lightning myself for multi-GPU training here. Although some of that code is a bit outdated (metrics are a standalone module now), you might find it useful.
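On the metrics point: the old pytorch_lightning.metrics namespace now lives in the separate torchmetrics package. A minimal sketch of the new style (the metric choice and numbers are illustrative, not from the question):

import torch
import torchmetrics

# Metrics moved from pytorch_lightning.metrics to the standalone
# torchmetrics package.
accuracy = torchmetrics.Accuracy(task="multiclass", num_classes=3)

preds = torch.tensor([0, 2, 1, 2])
target = torch.tensor([0, 1, 1, 2])
print(accuracy(preds, target))  # tensor(0.7500)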

Upvotes: 10
