Abdalrahman Hesham

Reputation: 29

pytorch "trying to backward through the graph a second time" error with chracter level RNN

I am training a character-level GRU with PyTorch, dividing the text into batches of a certain chunk length. This is the training loop:

for e in range(self.epochs):
  self.model.train()
  h = self.get_init_state(self.batch_size)

  for batch_num in range(self.num_batch_runs):
    batch = self.generate_batch(batch_num).to(device)

    # shift by one character: each input predicts the next character
    inp_batch = batch[:-1,:]
    tar_batch = batch[1:,:]

    self.model.zero_grad()
    loss = 0

    # feed the chunk one character at a time, carrying the hidden state
    for i in range(inp_batch.shape[0]):
      out, h = self.model(inp_batch[i:i+1,:], h)
      loss += loss_fn(out[0], tar_batch[i].view(-1))

    loss.backward()

    nn.utils.clip_grad_norm_(self.model.parameters(), 5.0)

    optimizer.step()

    if not (batch_num % 5):
      print("epoch: {}, loss: {}".format(e, loss.data.item()/inp_batch.shape[0]))

However, I am getting this error after the first batch:

Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

Thanks in advance.

Upvotes: 0

Views: 268

Answers (1)

Abdalrahman Hesham

Reputation: 29

I found the answer myself: the hidden state of the GRU was still attached to the computation graph of the previous batch, so the second backward() tried to traverse a graph whose buffers had already been freed. The hidden state has to be detached before each new batch using

h.detach_()
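For context, here is a minimal sketch of where the detach fits in the inner loop above (the structure mirrors the question's code; self.model, loss_fn, optimizer, and self.generate_batch are assumed to be defined as in the question):

for batch_num in range(self.num_batch_runs):
  batch = self.generate_batch(batch_num).to(device)

  inp_batch = batch[:-1,:]
  tar_batch = batch[1:,:]

  # Detach the hidden state from the previous batch's graph, so that
  # backward() only traverses the current batch (truncated BPTT).
  h.detach_()

  self.model.zero_grad()
  loss = 0

  for i in range(inp_batch.shape[0]):
    out, h = self.model(inp_batch[i:i+1,:], h)
    loss += loss_fn(out[0], tar_batch[i].view(-1))

  loss.backward()
  nn.utils.clip_grad_norm_(self.model.parameters(), 5.0)
  optimizer.step()

An equivalent alternative is h = h.detach(), which returns a detached tensor instead of detaching in place. Note that for an LSTM the hidden state is a tuple of two tensors, so each one would need to be detached.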

Upvotes: 1
