Reputation: 11776
Is there any way to use host RAM in CUDA after the GPU memory (NVIDIA) is completely used up?
What I have thought of so far is:
But obviously this will need a lot of synchronization.
Thank you!
Upvotes: 3
Views: 1342
Reputation: 5482
If the memory on the GPU is not enough, you can use host memory quite easily. What you are looking for is zero-copy memory, allocated with cudaHostAlloc. Here is the example from the CUDA Best Practices Guide:
float *a_h, *a_map;   // host pointer and its mapped device-side alias
cudaDeviceProp prop;
...
// Check that the device can map host memory into its address space
cudaGetDeviceProperties(&prop, 0);
if (!prop.canMapHostMemory)
    exit(0);
// Enable mapping of host allocations into the device address space
cudaSetDeviceFlags(cudaDeviceMapHost);
// Allocate pinned host memory that is mapped into the device address space
cudaHostAlloc((void **)&a_h, nBytes, cudaHostAllocMapped);
// Obtain the device pointer that refers to the same mapped allocation
cudaHostGetDevicePointer((void **)&a_map, a_h, 0);
// The kernel reads and writes host memory directly over PCIe
kernel<<<gridSize, blockSize>>>(a_map);
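For illustration only (the guide does not show the kernel body), the mapped pointer is dereferenced inside the kernel exactly like an ordinary global-memory pointer; the doubling operation below is just a placeholder:

// Hypothetical kernel body matching the one-argument launch above:
// it assumes gridSize * blockSize equals the number of floats in the
// buffer (otherwise pass the element count and add a bounds check).
// Every access to a_map travels over PCIe, so coalesced, single-pass
// access patterns are essential for zero-copy to perform reasonably.
__global__ void kernel(float *a_map)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    a_map[i] *= 2.0f;   // illustrative operation
}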
However, the performance will be limited by the PCIe bandwidth (around 6 GB/s).
Here is the corresponding section of the Best Practices Guide: Zero-Copy
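If you want to check that a zero-copy kernel really is PCIe-bound, you can time it with CUDA events and compute the effective bandwidth yourself. This is only a sketch: it assumes nBytes, gridSize, and blockSize are the ones from the snippet above, that the kernel reads and writes each element once, and that <cstdio> is included.

// Rough timing sketch: with zero-copy, the effective bandwidth should
// land near the PCIe limit rather than the device-memory bandwidth.
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
kernel<<<gridSize, blockSize>>>(a_map);
cudaEventRecord(stop);
cudaEventSynchronize(stop);   // the launch is asynchronous, so wait for it

float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);
// 2 * nBytes: one read and one write of the buffer over PCIe
printf("Effective bandwidth: %.2f GB/s\n", (2.0 * nBytes / 1e9) / (ms / 1e3));

cudaEventDestroy(start);
cudaEventDestroy(stop);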
Upvotes: 6