Kilian Hersent

Reputation: 103

How to get the full Jacobian of a derivative in PyTorch?

Let's consider a simple tensor x and define another tensor that depends on x and has multiple dimensions: y = (x, 2x, x^2).

How can I get the full gradient dy/dx = (1, 2, 2x)?

For example, let's take this code:

import torch
from torch.autograd import grad

x = 2 * torch.ones(1)
x.requires_grad = True
y = torch.cat((x, 2*x, x*x))
# dy_dx = ???

This is what I have unsuccessfully tried so far:

>>> dy_dx = grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)
(tensor([7.], grad_fn=<AddBackward0>),)
>>> dy_dx = grad(y, x, grad_outputs=torch.Tensor([1,0,0]), create_graph=True)
(tensor([1.], grad_fn=<AddBackward0>),)
>>> dy_dx = grad(y, [x,x,x], grad_outputs=torch.eye(3), create_graph=True)
(tensor([7.], grad_fn=<AddBackward0>),)

Each time I get only part of the gradient, or an accumulated version...

I know I could use a for loop built on the second expression, like

# extract one row of the Jacobian at a time with a one-hot grad_outputs vector
dy_dx = torch.zeros_like(y)
coord = torch.zeros_like(y)
for i in range(y.size(0)):
    coord[i] = 1
    dy_dx[i], = grad(y, x, grad_outputs=coord, create_graph=True)
    coord[i] = 0

However, as I am dealing with high-dimensional tensors, this for loop could take too long to compute. Moreover, there must be a way to compute the full Jacobian without accumulating the gradient...

Does anyone have a solution? Or an alternative?

Upvotes: 5

Views: 1973

Answers (1)

alxyok

Reputation: 186

torch.autograd.grad in PyTorch aggregates the gradients over the output. To get the full Jacobian of a vector output with respect to the input, use torch.autograd.functional.jacobian.
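
A minimal sketch using the question's own y = (x, 2x, x^2), assuming a PyTorch version that provides torch.autograd.functional.jacobian (1.5 or later); the wrapper function f below is only there to give jacobian a callable to differentiate:

import torch
from torch.autograd.functional import jacobian

def f(x):
    # same mapping as in the question: y = (x, 2x, x^2)
    return torch.cat((x, 2 * x, x * x))

x = 2 * torch.ones(1)

# jacobian(f, x) evaluates f at x and returns dy/dx with shape
# y.shape + x.shape, i.e. (3, 1) here -- one row per output component
dy_dx = jacobian(f, x)
print(dy_dx)
# tensor([[1.],
#         [2.],
#         [4.]])

If the Jacobian itself needs to be differentiated further (as the create_graph=True calls in the question suggest), jacobian also accepts a create_graph=True argument.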

Upvotes: 3
