Wickkiey

Reputation: 4632

PyTorch, gradient calculations

https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/neural_networks_tutorial.ipynb

Hi, I am trying to understand neural networks with PyTorch, and I have doubts about the gradient calculations.

```
import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update
```

From the above code, I understand that loss.backward() calculates the gradients. What I am not sure about is how this information is shared with the optimizer so that the parameters get updated.

Can anyone explain this?

Thanks in advance!

Upvotes: 0

Views: 84

Answers (1)

prosti

Reputation: 46341

When you created the optimizer on this line:

```
optimizer = optim.SGD(net.parameters(), lr=0.01)
```

you handed it net.parameters(), that is, all the learnable parameters that will be updated based on their gradients.

The model and the optimizer are connected only because they share the same parameter tensors: loss.backward() stores each gradient in the .grad attribute of the corresponding parameter, and optimizer.step() reads those same .grad attributes to perform the update.
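Here is a minimal sketch of that connection, using a hypothetical one-layer model (net = nn.Linear(3, 1) and an MSE loss stand in for the net and criterion from the question). After backward(), the gradients sit on the parameters themselves; step() then mutates those shared tensors in place:

```
import torch
import torch.nn as nn
import torch.optim as optim

# hypothetical stand-ins for the net/criterion in the question
net = nn.Linear(3, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

input = torch.randn(1, 3)
target = torch.randn(1, 1)

optimizer.zero_grad()                   # clear any old .grad values
loss = criterion(net(input), target)
loss.backward()                         # fills p.grad for every parameter

print(net.weight.grad)                  # the gradient lives on the parameter

before = net.weight.clone()
optimizer.step()                        # reads p.grad, updates p in place
print(torch.equal(before, net.weight))  # False: the shared tensor changed
```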

PyTorch parameters are tensors; they are not called Variables anymore.
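A quick way to check this (again with a hypothetical one-layer model):

```
import torch
import torch.nn as nn

net = nn.Linear(3, 1)   # hypothetical one-layer model
for p in net.parameters():
    # nn.Parameter is a subclass of torch.Tensor
    print(type(p), isinstance(p, torch.Tensor))
```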

Upvotes: 2
