Reputation: 5544
I have a question:
Let's say I have 2 GPUs in my system and 2 host processes running CUDA code. How can I make sure that each process gets its own GPU?
I'm considering setting exclusive_thread mode, but I can't see how to take advantage of it: once I have checked that a device is free, how can I be sure it remains free until I call cudaSetDevice?
EDIT:
So far I've tried this:
int devN = 0;
while (cudaSuccess != cudaSetDevice(devN))
    devN = (devN + 1) % 2;
but I get a
CUDA Runtime API error 77: an illegal memory access was encountered.
which is not surprising, since I am in EXCLUSIVE_PROCESS mode.
Upvotes: 1
Views: 2164
Reputation: 2916
There are two parts to this question: assigning a process to a GPU, and making sure a GPU is available to a single process only.
The first part is simple to accomplish with the CUDA_VISIBLE_DEVICES environment variable: start your first process with CUDA_VISIBLE_DEVICES=0
and your second process with CUDA_VISIBLE_DEVICES=1
. Each process will then see a single GPU, with device index 0, and each will see a different physical GPU.
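As a minimal sketch (the binary name ./my_app is a placeholder for your CUDA program), launching the two processes could look like this; the variable is set per-process, so each child sees only its own value:

```shell
# Each process sees exactly one GPU, and in both cases it appears as device 0.
CUDA_VISIBLE_DEVICES=0 ./my_app &
CUDA_VISIBLE_DEVICES=1 ./my_app &
wait
```

Inside each process, cudaSetDevice(0) then refers to a different physical GPU.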
Running nvidia-smi topo -m
will display the GPU topology and the corresponding CPU affinity of each GPU.
You may then set the CPU affinity of your process with taskset
or numactl
on Linux, or SetProcessAffinityMask
on Windows.
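For example, on Linux (the core list 0-7 and NUMA node 0 below are assumptions; read the real values from the nvidia-smi topo -m output, and ./my_app is again a placeholder):

```shell
# Assumption: `nvidia-smi topo -m` reported CPU affinity 0-7 for GPU 0.
CUDA_VISIBLE_DEVICES=0 taskset -c 0-7 ./my_app

# Equivalent with numactl, binding to NUMA node 0:
CUDA_VISIBLE_DEVICES=0 numactl --cpunodebind=0 ./my_app
```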
To make sure that no other process may access your GPU, configure the GPU driver for exclusive-process compute mode: nvidia-smi --compute-mode=EXCLUSIVE_PROCESS
(note that the numeric value 1 corresponds to the deprecated EXCLUSIVE_THREAD mode; EXCLUSIVE_PROCESS is 3).
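With the GPUs in EXCLUSIVE_PROCESS mode, the probing loop from the question can be made robust. A minimal sketch, with error checking trimmed: cudaSetDevice() by itself does not create a context (context creation is lazy), so its return value does not tell you whether the GPU is free. Forcing context creation with cudaFree(0) and checking that call's result does:

```cuda
#include <cuda_runtime.h>

// Returns the index of the first GPU on which this process could create
// a context, or -1 if every GPU is already owned by another process.
int pickFreeDevice()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess)
        return -1;
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);               // selects the device, no context yet
        if (cudaFree(0) == cudaSuccess)   // forces context creation
            return dev;                   // this GPU is now ours exclusively
        cudaDeviceReset();                // clear the error and try the next one
    }
    return -1;
}
```

Each process calls pickFreeDevice() at startup; whichever process creates the context first keeps the GPU, and the other one fails over to the next device, so no separate check-then-set race remains.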
Upvotes: 2