Reputation: 28054
How do I convert a torch tensor to numpy?
Upvotes: 27
Views: 79369
Reputation: 1089
You can use the force=True parameter of torch.Tensor.numpy():
import torch
t = torch.rand(3, 2, device='cuda:0')
print(t.numpy(force=True))
t.numpy(force=True) is shorthand for:
t.detach().cpu().resolve_conj().resolve_neg().numpy()
The force parameter was introduced in PyTorch 1.13.
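As a quick check (a minimal sketch that needs no GPU, since the tensor below merely requires grad), force=True gives the same values as the explicit chain:
import torch
t = torch.rand(3, 2, requires_grad=True)  # CPU tensor that tracks gradients
# t.numpy() would raise an error here because the tensor requires grad
print(t.numpy(force=True))  # force=True detaches (and copies if needed) before converting
print(t.detach().cpu().resolve_conj().resolve_neg().numpy())  # same values via the explicit chain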
Upvotes: 1
Reputation: 305
import torch
x = torch.tensor([0.1, 0.32], device='cuda:0')
x.detach().cpu().data.numpy()  # detach from the autograd graph, copy to the CPU, then convert (.data is optional here)
Upvotes: -1
Reputation: 28054
Copied from the PyTorch docs:
import torch
a = torch.ones(5)
print(a)
# tensor([1., 1., 1., 1., 1.])
b = a.numpy()
print(b)
# [1. 1. 1. 1. 1.]
Following the discussion with @John below:
If the tensor is (or could be) on the GPU, or if it requires (or could require) grad, one can use
t.detach().cpu().numpy()
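For instance (a sketch that assumes a CUDA device is available), a tensor that is both on the GPU and requires grad cannot be converted directly, but this chain handles it:
import torch
if torch.cuda.is_available():  # sketch assumes a CUDA device
    t = torch.rand(2, device='cuda', requires_grad=True)
    # t.numpy() raises here: the tensor requires grad and lives on the GPU
    print(t.detach().cpu().numpy())  # detach from the graph, copy to host memory, convert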
I recommend uglifying your code only as much as required.
Upvotes: 44
Reputation: 1109
Another useful way:
import torch
a = torch.tensor(0.1, device='cuda')
a.cpu().data.numpy()
Output:
array(0.1, dtype=float32)
Upvotes: 4
Reputation: 46469
This is a function from fastai core:
def to_np(x):
    "Convert a tensor to a numpy array."
    return apply(lambda o: o.data.cpu().numpy(), x)
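Note that apply here is a fastai helper rather than a PyTorch builtin; it maps a function over a tensor or a nested collection of tensors. A rough, self-contained sketch of the same idea (illustrative only, not fastai's actual implementation):
def to_np(x):
    "Convert a tensor, or a nested list/tuple/dict of tensors, to numpy arrays."
    if isinstance(x, dict):
        return {k: to_np(v) for k, v in x.items()}
    if isinstance(x, (list, tuple)):
        return type(x)(to_np(o) for o in x)
    return x.detach().cpu().numpy()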
Using a ready-made function from your preferred PyTorch library is a nice choice.
If you look inside PyTorch Transformers you will find this code:
preds = logits.detach().cpu().numpy()
So why is the detach() method needed? It is needed when we want to detach the tensor from the autograd (AD) computational graph.
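A small illustration of what detach() does: the detached tensor is excluded from the autograd graph, yet it still shares storage with the original:
import torch
t = torch.zeros(2, requires_grad=True)
d = t.detach()
print(d.requires_grad)  # False: d is outside the autograd graph
print(d.data_ptr() == t.data_ptr())  # True: d still shares storage with t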
Still, note that the CPU tensor and the numpy array remain connected: they share the same storage:
import torch
tensor = torch.zeros(2)
numpy_array = tensor.numpy()
print('Before edit:')
print(tensor)
print(numpy_array)
tensor[0] = 10
print()
print('After edit:')
print('Tensor:', tensor)
print('Numpy array:', numpy_array)
Output:
Before edit:
tensor([0., 0.])
[0. 0.]
After edit:
Tensor: tensor([10., 0.])
Numpy array: [10. 0.]
The value of the first element is shared by the tensor and the numpy array. Changing it to 10 in the tensor changed it in the numpy array as well.
This is why we need to be careful, since altering the numpy array may alter the CPU tensor as well.
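The link works in the other direction as well; if you need an independent copy, copy the data so the numpy array owns its own memory (a small sketch):
import torch
tensor = torch.zeros(2)
shared = tensor.numpy()
shared[1] = 5
print(tensor)  # tensor([0., 5.]): the edit is visible on the torch side too
independent = tensor.numpy().copy()  # or tensor.clone().numpy()
independent[0] = -1
print(tensor)  # still tensor([0., 5.]): the copy has its own memory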
Upvotes: 4
Reputation: 2311
You can try the following ways, depending on where the tensor lives and whether it tracks gradients (see the sketch after the list):
1. torch.Tensor().numpy()
2. torch.Tensor().cpu().data.numpy()
3. torch.Tensor().cpu().detach().numpy()
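For a plain CPU tensor all three chains give the same result; the extra calls only matter when the tensor requires grad (drop it with .data or detach()) or lives on the GPU (move it with cpu()). A quick sketch:
import torch
t = torch.rand(2)  # plain CPU tensor, no grad tracking
print(t.numpy())  # 1. enough in the simple case
print(t.cpu().data.numpy())  # 2. .data is the older way to drop grad tracking
print(t.cpu().detach().numpy())  # 3. detach() is the recommended way and handles grad-tracking GPU tensors too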
Upvotes: 11
Reputation: 392
Sometimes, if the tensor has a gradient attached (i.e. it requires grad), you'll first have to call the .detach() method before the .numpy() method.
loss = loss_fn(preds, labels)
print(loss.detach().numpy())
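A self-contained version of that snippet (the tensors and loss function below are only illustrative stand-ins for real model outputs and targets):
import torch
preds = torch.rand(4, requires_grad=True)  # stand-in for model predictions
labels = torch.rand(4)  # stand-in for ground-truth labels
loss_fn = torch.nn.MSELoss()
loss = loss_fn(preds, labels)
# loss.numpy() would fail because loss is still attached to the autograd graph
print(loss.detach().numpy())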
Upvotes: 1