Reputation: 1001
I got the following error when I ran my PyTorch deep learning model in Google Colab:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1370 ret = torch.addmm(bias, input, weight.t())
1371 else:
-> 1372 output = input.matmul(weight.t())
1373 if bias is not None:
1374 output += bias
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
I even reduced the batch size from 128 to 64, i.e., halved it, but I still got this error. Earlier, I had run the same code with a batch size of 128 and didn't get any error like this.
Upvotes: 52
Views: 138452
Reputation: 45
I solved this problem by upgrading the GPU. In my case, I upgraded from a T4 to an L4.
Upvotes: -1
Reputation: 73
I was instantiating the AutoModelForSequenceClassification class as follows:
model = AutoModelForSequenceClassification.from_pretrained(model_name)
and I ended up with this problem. I fixed it by declaring the number of labels in my dataset:
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=train_df['label'].nunique())
Upvotes: 0
Reputation: 499
No, batch size does not matter in this case.
The most likely reason is that there is an inconsistency between the number of labels and the number of output units.
Try printing the size of the final output in the forward pass:
print(model.fc1(x).size())
Here, fc1 would be replaced by the name of your model's last linear layer before returning the output.
Also make sure that label.size() is equal to prediction.size() before calculating the loss.
And even after fixing that problem, you'll have to restart the GPU runtime (I needed to do this in my case when using a Colab GPU).
This GitHub issue comment might also be helpful.
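For illustration, here is a minimal runnable sketch of these checks (not from the original answer; the model, shapes, and num_classes below are placeholder assumptions):
import torch
import torch.nn as nn

num_classes = 10  # assumption: set this to your dataset's number of labels
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, num_classes))

x = torch.randn(64, 1, 28, 28)                 # dummy input batch
label = torch.randint(0, num_classes, (64,))   # dummy targets

prediction = model(x)
print(prediction.size())                       # expect torch.Size([64, num_classes])
assert prediction.size(1) == num_classes       # output units must match the label count
assert label.max().item() < num_classes        # labels must lie in [0, num_classes)

loss = nn.CrossEntropyLoss()(prediction, label)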
Upvotes: 35
Reputation: 5467
Reducing the maximum sequence length for a model that has a limit (e.g., BERT) solved this error for me.
Also, I faced the same issue when I resized the embedding layer of a model with model.resize_token_embeddings(NEW_SIZE), trained it, and saved it. At prediction time, when I loaded the model, I needed to resize the embedding layer again!
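A minimal sketch of that load-time fix, assuming a Hugging Face transformers checkpoint saved at a placeholder path and a tokenizer that already contains the added tokens:
from transformers import AutoModelForSequenceClassification, AutoTokenizer

save_dir = "path/to/saved-model"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(save_dir)
model = AutoModelForSequenceClassification.from_pretrained(save_dir)

# resize the embedding layer again so it matches the extended vocabulary
model.resize_token_embeddings(len(tokenizer))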
Upvotes: 2
Reputation: 121
One cause of this problem may be that the number of labels is not equal to the number of network output channels, i.e., the number of output classes predicted. Adjust the output layer to match and it should fix the issue.
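A minimal sketch of such an adjustment (labels is a placeholder name for your dataset's label list; 512 is an assumed hidden size):
import torch.nn as nn

labels = [0, 1, 2, 3, 2, 1]     # placeholder: your dataset's labels
num_classes = len(set(labels))  # number of distinct classes

# the last layer's out_features must equal num_classes
model_head = nn.Linear(512, num_classes)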
Upvotes: 6
Reputation: 4507
This error means "resource allocation failed inside the cuBLAS library".
Decreasing the batch size solved the issue for me. You said you reduced it to 64 and it didn't help; try 32, 8, 1, etc. as well.
Also, try running the same code on your CPU to check whether everything is fine with your tensors' shapes.
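A minimal sketch of that CPU check (model, x, and target are placeholder names for your network and one batch):
# shape mismatches on the CPU raise a readable Python error
# instead of an opaque cuBLAS failure
model_cpu = model.to("cpu")
output = model_cpu(x.to("cpu"))
print(output.size(), target.size())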
Upvotes: 8
Reputation: 1
For a large-scale dataset, just delete the temporary variables at the end of each iteration so their GPU memory can be reclaimed:
for batch_idx, (x, target) in enumerate(train_dataloader):
    ...
    # free per-batch tensors before the next iteration
    del x, target, loss, outputs
Upvotes: 0
Reputation: 19
I had the same problem. While I don't know the exact reason, I know the cause: the last layer of my nn.Module was
self.fc3 = nn.Linear(84, num_classes)
I had changed the real number of classes to be twice as many, but I did not update the value of the num_classes variable, which presumably produced a mismatch somewhere when outputting the results. After I fixed the value of num_classes, it just worked. I recommend going over the numbers in your model again.
Upvotes: 1
Reputation: 1
My model classifies two classes with only one neuron in the last layer. I had this problem when the last layer was nn.Linear(512, 1) in a PyTorch environment, but my labels were just [0] or [1]. I solved the problem by adding an nn.Sigmoid() layer after it.
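A minimal sketch of that setup (the batch size and the 512-unit input are assumptions):
import torch
import torch.nn as nn

# one output neuron squashed into (0, 1) by the sigmoid
model = nn.Sequential(nn.Linear(512, 1), nn.Sigmoid())

x = torch.randn(8, 512)                        # dummy batch
target = torch.randint(0, 2, (8, 1)).float()   # labels of 0. or 1.

prob = model(x)
loss = nn.BCELoss()(prob, target)              # BCELoss expects probabilities, hence the Sigmoid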
Upvotes: 0
Reputation: 163
Reducing the batch size worked for me, and the training proceeded as planned.
Upvotes: 13
Reputation: 661
This error can actually be due to different reasons. It is recommended to debug CUDA errors by running the code on the CPU, if possible. If that's not possible, try executing the script via:
CUDA_LAUNCH_BLOCKING=1 python [YOUR_PROGRAM]
This makes CUDA operations run synchronously, so the stack trace points to the exact line of code that raised the error and you can resolve it.
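In a notebook such as Colab, a sketch of the same idea is to set the variable before PyTorch initializes CUDA (my addition, not part of the original answer):
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before the first CUDA call

import torch  # import torch (and run any CUDA work) only after setting the flag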
Upvotes: 25