Reputation: 1319
Calling tensor.numpy() gives the error:
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
tensor.cpu().detach().numpy() gives the same error.
Upvotes: 57
Views: 128703
Reputation: 1780
The best solution is to use torch.no_grad(), the context manager that disables gradient tracking locally.
Just write your code inside this context manager, like:
with torch.no_grad():
    graph_x = some_list_of_numbers
    graph_y = some_list_of_tensors
    plt.plot(graph_x, graph_y)
    plt.show()
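As a concrete, self-contained illustration (a minimal sketch, not part of the original answer): any tensor computed inside torch.no_grad() does not require grad, so it can be converted with .numpy() directly:

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)

with torch.no_grad():
    y = x * 2  # computed inside no_grad(), so y.requires_grad is False

print(y.requires_grad)  # False
arr = y.numpy()         # works: y is not part of the autograd graph
print(arr)              # [2. 4.]
```

Note that no_grad() affects tensors created inside the block; a pre-existing tensor with requires_grad=True (like x here) still needs .detach() before .numpy().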
Upvotes: 4
Reputation: 91
I just ran into this problem when running through epochs. I recorded the loss to a list:
final_losses.append(loss)
Once I ran through all the epochs, I wanted to graph the output:
plt.plot(range(epochs), final_losses)
plt.ylabel('RMSE Loss')
plt.xlabel('Epoch');
I was running this on my Mac with no problem, but I needed to run it on a Windows PC, and it produced the error noted above. So I checked the type of each variable:
type(range(epochs)), type(final_losses)
range, list
Seems like it should be OK.
It took a little bit of fidgeting to realize that final_losses was a list of tensors, not numbers. I then converted the tensors to plain Python numbers in a new list variable, fi_los:
fi_los = [fl.item() for fl in final_losses]
plt.plot(range(epochs), fi_los)
plt.ylabel('RMSE Loss')
plt.xlabel('Epoch');
Success!
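The same idea can be applied during training: call .item() when appending, so the list only ever contains plain floats. A minimal sketch with a made-up loss, since the original training loop isn't shown:

```python
import torch

epochs = 3
final_losses = []
for epoch in range(epochs):
    # stand-in for a real training step; loss would normally come from a model
    loss = torch.tensor(1.0 / (epoch + 1), requires_grad=True)
    final_losses.append(loss.item())  # .item() stores a Python float, not a tensor

print(final_losses)  # plain floats, safe to pass to plt.plot()
```

This also avoids keeping the autograd graph of every epoch's loss alive in memory.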
Upvotes: 7
Reputation: 496
I had the same error message, but it was for drawing a scatter plot in matplotlib.
There are two ways I could get out of this error:
Import the fastai.basics library with: from fastai.basics import *
If you only use the torch library, remember to take off the requires_grad with:
with torch.no_grad():
    (your code)
Upvotes: 31
Reputation: 627
from torch.autograd import Variable

type(y)  # <class 'torch.Tensor'>
y = Variable(y, requires_grad=True)
y = y.detach().numpy()
type(y)  # <class 'numpy.ndarray'>
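Note that Variable was merged into Tensor in PyTorch 0.4, so the wrapper is no longer needed; a plain tensor with requires_grad=True behaves the same way. A minimal sketch of the same conversion without Variable:

```python
import torch

# requires_grad on a plain tensor replaces the old Variable wrapper
y = torch.tensor([1.0, 2.0], requires_grad=True)
y_np = y.detach().numpy()
print(type(y_np))  # <class 'numpy.ndarray'>
```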
Upvotes: 3
Reputation: 1039
import torch
tensor1 = torch.tensor([1.0, 2.0], requires_grad=True)
print(tensor1)
print(type(tensor1))
tensor1 = tensor1.numpy()
print(tensor1)
print(type(tensor1))
which leads to the exact same error for the line tensor1 = tensor1.numpy():
tensor([1., 2.], requires_grad=True)
<class 'torch.Tensor'>
Traceback (most recent call last):
  File "/home/badScript.py", line 8, in <module>
    tensor1 = tensor1.numpy()
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
Process finished with exit code 1
The fix was suggested to you in the error message itself: just replace var with your variable name.
import torch
tensor1 = torch.tensor([1.0, 2.0], requires_grad=True)
print(tensor1)
print(type(tensor1))
tensor1 = tensor1.detach().numpy()
print(tensor1)
print(type(tensor1))
which returns, as expected:
tensor([1., 2.], requires_grad=True)
<class 'torch.Tensor'>
[1. 2.]
<class 'numpy.ndarray'>
Process finished with exit code 0
You need to convert your tensor to another tensor that, in addition to holding the same values, does not require a gradient. That other tensor can then be converted to a numpy array. Cf. this discuss.pytorch post. (More precisely, I think one needs to do this to get the actual tensor out of its pytorch Variable wrapper; cf. this other discuss.pytorch post.)
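One detail worth knowing (my addition, not from the linked posts): detach() returns a tensor that shares storage with the original, and the numpy array produced from it shares that storage too, so in-place edits to the array write through to the tensor:

```python
import torch

t = torch.tensor([1.0, 2.0], requires_grad=True)
arr = t.detach().numpy()  # shares memory with t
arr[0] = 5.0              # writes through to the original tensor

print(t)  # tensor([5., 2.], requires_grad=True)
```

If you need an independent copy, use t.detach().clone().numpy() or copy the resulting array with arr.copy().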
Upvotes: 55