biao_biao

Reputation: 311

In Colaboratory, CUDA cannot be used for PyTorch

The error message is as follows:

RuntimeError   Traceback (most recent call last)
<ipython-input-24-06e96beb03a5> in <module>()
     11
     12 x_test = np.array(test_features)
---> 13 x_test_cuda = torch.tensor(x_test, dtype=torch.float).cuda()
     14 test = torch.utils.data.TensorDataset(x_test_cuda)
     15 test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)

/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py in _lazy_init()
    160 class CudaError(RuntimeError):
    161     def __init__(self, code):
--> 162         msg = cudart().cudaGetErrorString(code).decode('utf-8')
    163         super(CudaError, self).__init__('{0} ({1})'.format(msg, code))
    164

RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:51
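
For reference, a device-agnostic version of the failing lines avoids the hard crash when no GPU is attached (a minimal sketch; test_features and batch_size are assumed to be defined as in the traceback above):

import numpy as np
import torch

# Fall back to the CPU when the runtime has no CUDA-capable device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x_test = np.array(test_features)
x_test_tensor = torch.tensor(x_test, dtype=torch.float).to(device)

test = torch.utils.data.TensorDataset(x_test_tensor)
test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)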

Upvotes: 17

Views: 28255

Answers (2)

SenthurLP

Reputation: 172

Sometimes this error occurs even after performing the step in the answer below. The likely reason is that you have been using Colab for long-running computations, which its usage limits discourage.

You can find the reasons and Google's explanation of the usage limits at this link: https://research.google.com/colaboratory/faq.html#usage-limits
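
One way to tell whether this is what happened is to check from a notebook cell whether a GPU is currently assigned to the runtime at all (a minimal sketch):

import torch
# If Colab has withdrawn the GPU (e.g. after hitting usage limits),
# this prints False even though the runtime type is still set to GPU.
print(torch.cuda.is_available())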

Upvotes: -2

BlankSpace

Reputation: 681

Click on Runtime and select Change runtime type.

Then, under Hardware accelerator, select GPU and hit Save.
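
After saving, the runtime restarts, so re-run your cells. A quick sanity check that the new runtime actually sees a GPU before re-running the failing code might look like this (a minimal sketch):

import torch
print(torch.cuda.get_device_name(0))   # e.g. 'Tesla T4'; raises if no GPU is attached
x = torch.zeros(1).cuda()              # the original .cuda() call should now succeed
print(x.device)                        # cuda:0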

Upvotes: 62
