Anil

Reputation: 1334

Utility of wrapping tensor in Variable with requires_grad=False in legacy PyTorch

I'm using a codebase that was written in 2017/18 and I found the following code:

audio_norm = audio_norm.unsqueeze(0)
audio_norm = torch.autograd.Variable(audio_norm, requires_grad=False)

I am aware that, in earlier versions of PyTorch, wrapping a tensor in Variable allowed its gradients to be incorporated into the computation graph that autograd builds (this is no longer needed), but I'm confused about what the utility of wrapping a tensor in torch.autograd.Variable(my_tensor, requires_grad=False) would be.

Could someone explain whether this was an idiom and what the analogous modern PyTorch code would be? My guess is that it corresponds to calling detach on the tensor to stop its gradients from being tracked.

For reference, the relevant line is line 45 of the data_utils.py script in NVIDIA's Tacotron 2 implementation. Thanks.

Upvotes: 1

Views: 881

Answers (2)

Viacheslav Ivannikov

Reputation: 732

You are looking for:

audio_norm = audio_norm.unsqueeze(0)
audio_norm = torch.tensor(audio_norm)

If you need it to require grad, then:

audio_norm = torch.tensor(audio_norm, requires_grad=True)

Upvotes: 1

jodag

Reputation: 22244

In PyTorch 0.3.1 and earlier, any tensor involved in a computation that needed to be tracked by autograd had to be wrapped in a Variable. Semantically Variable.requires_grad in PyTorch 0.3.1 and earlier is equivalent to Tensor.requires_grad now. Basically, requires_grad=False simply tells autograd that you will never need the gradient w.r.t. that variable/tensor. Mathematical operations are only ever recorded (i.e. a computation graph is constructed) if at least one input variable/tensor has requires_grad=True.
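
For illustration, here is a minimal sketch of that rule in modern PyTorch (the tensor names are just placeholders):

import torch

# requires_grad=False (the default): autograd records nothing,
# so the result of the operation carries no grad_fn.
a = torch.randn(3)
print(a.requires_grad)    # False
print((a * 2).grad_fn)    # None -- no graph was built

# As soon as one input requires a gradient, the operation is recorded.
b = torch.randn(3, requires_grad=True)
print((a * b).grad_fn)    # <MulBackward0 object at ...>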

Note that any code using PyTorch newer than 0.3.1 does not actually require the use of Variable; this includes the code in the repository you provided (which explicitly requires PyTorch >= 1.0). In 0.4 the functionality of Variable was merged into the Tensor class. In modern PyTorch, you simply set the requires_grad attribute of the tensor to achieve the same behavior. By default, a new user-defined tensor is already constructed with requires_grad=False, so the modern equivalent of the code you posted is usually to just delete the Variable line. If you aren't sure whether the tensor already has requires_grad == False, you can set it explicitly.

audio_norm = audio_norm.unsqueeze(0)
audio_norm.requires_grad_(False)
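
As a quick sanity check (assuming audio_norm is an ordinary tensor loaded from the audio file, as in that repository), you can verify that nothing is being tracked:

print(audio_norm.requires_grad)   # False by default, so the old Variable wrapper added nothing
print(audio_norm.grad_fn)         # None -- no computation graph attached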

You can read the legacy documentation here for more information.

Upvotes: 1
