Reputation: 1525
I am trying to train a network with Caffe; I'm trying to implement FCN-8s. My image size is 512x640 and the batch size is 1.
I am currently running this on an Amazon EC2 instance (g2.2xlarge) with 4 GB of GPU memory. But when I run the solver, it immediately throws an error:
Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
Aborted (core dumped)
Can someone help me proceed from here?
Upvotes: 11
Views: 26970
Reputation: 457
I faced the same issue. It was resolved after I force-killed the process linked with training: kill -9 <pid>. For some reason, the previous train.py process was still running and holding the GPU memory.
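A minimal sketch of that cleanup, assuming the stale trainer was launched as train.py (adjust the pattern to your own script name):

```shell
# Find leftover training processes by their command line
# (the pattern "train.py" is an assumption, not universal)
pids=$(pgrep -f train.py || true)

# Force-kill each one so its CUDA context is destroyed
# and the GPU memory is released
for pid in $pids; do
    kill -9 "$pid"
done

# Afterwards, nvidia-smi should no longer list the process:
#   nvidia-smi
echo "killed: ${pids:-none}"
```
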
Upvotes: 0
Reputation: 1
I was facing a similar issue when running Deeplab v2 on a PC with the following configuration:
OS: Ubuntu 18.04.3 LTS (64-bit)
Processor: Intel Core i7-6700K CPU @ 4.00 GHz x 8
GPU: GeForce GTX 780 (3022 MiB)
RAM: 31.3 GiB
Changing both the test and training batch sizes to 1 didn't help me, but reducing the dimensions of the output image sure did!
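In Deeplab v2's Caffe prototxt the input/output resolution is controlled by crop_size in the data layer's transform_param. A hedged sketch of the change (the layer name, type, and the value 321 are illustrative assumptions, check your own prototxt):

```
# Hypothetical data layer fragment from a Deeplab v2 train.prototxt
layer {
  name: "data"
  type: "ImageSegData"    # Deeplab's data layer type; verify against your file
  transform_param {
    mirror: true
    crop_size: 321        # reduce this (e.g. from 513) to cut GPU memory use
  }
}
```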
Upvotes: 0
Reputation: 114786
The error you get is indeed out of memory, but it's not the RAM, but rather GPU memory (note that the error comes from CUDA).
Usually, when Caffe runs out of memory, the first thing to do is reduce the batch size (at the cost of gradient accuracy). But since you are already at batch size = 1...
Are you sure batch size is 1 for both TRAIN and TEST phases?
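The batch size is set separately per phase in the prototxt, so it is easy to reduce one and miss the other. A sketch (the layer type and names are assumptions; match them to your own net definition):

```
# Both data layers need batch_size: 1, not just the TRAIN one
layer {
  name: "data"
  type: "Data"
  include { phase: TRAIN }
  data_param { batch_size: 1 }
}
layer {
  name: "data"
  type: "Data"
  include { phase: TEST }
  data_param { batch_size: 1 }
}
```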
Upvotes: 17
Reputation: 5698
Caffe can use multiple GPUs. This is supported only in the C++ interface, not in the Python one. You could also enable cuDNN for a lower memory footprint.
https://github.com/BVLC/caffe/blob/master/docs/multigpu.md
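Both suggestions map to concrete switches. A sketch, assuming Caffe was built from source (GPU ids are examples):

```
# Multi-GPU training goes through the command-line 'caffe' tool (C++),
# e.g. data-parallel training across GPUs 0 and 1:
#   caffe train --solver=solver.prototxt -gpu 0,1
# or across all visible GPUs:
#   caffe train --solver=solver.prototxt -gpu all

# cuDNN is enabled at build time by uncommenting this in Makefile.config,
# then rebuilding:
#   USE_CUDNN := 1
```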
Upvotes: 2