Leockl

Reputation: 2166

Can numpy arrays run in GPUs?

I am using PyTorch. I have the following code:

import numpy as np
import torch

X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X).cuda()

X_split = np.array_split(X.numpy(), 
                         indices_or_sections = 2, 
                         axis = 0)
X_split

but I am getting this error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-121-870b5d3f67b6> in <module>()
----> 1 X_split = np.array_split(X.numpy(), 
      2                          indices_or_sections = 2,
      3                          axis = 0)
      4 X_split

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

The error message is clear, and I know how to fix it by adding .cpu(), i.e. X.cpu().numpy(). I am just curious: does this confirm that NumPy arrays cannot run on GPUs/CUDA?
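For reference, the version that works for me just copies the tensor back to host memory before handing it to NumPy:

X_split = np.array_split(X.cpu().numpy(),   # copy the CUDA tensor to host memory first
                         indices_or_sections = 2,
                         axis = 0)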

Upvotes: 5

Views: 12658

Answers (1)

jodag

Reputation: 22274

No, you cannot generally run NumPy functions on GPU arrays. PyTorch reimplements much of NumPy's functionality for PyTorch tensors. For example, torch.chunk works similarly to np.array_split, so you could do the following:

X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X).cuda()           # move the tensor to the GPU
X_split = torch.chunk(X, chunks=2, dim=0)  # split along dim 0, entirely on the GPU

which splits X into multiple tensors without ever moving X off the GPU.
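If you want to verify that the chunks never left the GPU, you can check each chunk's device (this sketch assumes a single CUDA device, which shows up as cuda:0):

for chunk in X_split:
    print(chunk.shape, chunk.device)   # torch.Size([2, 4]) cuda:0, then torch.Size([1, 4]) cuda:0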

Upvotes: 5
