Reputation: 3581
I have two NVIDIA cards in my machine and I want to execute a CUDA kernel on one of them (for example, on the second). Unfortunately, the tutorials I have read don't show how to select the device for memory allocation and kernel execution, the way it is done in OpenCL.
Could you tell me how to choose the GPU on which to allocate memory and execute kernels?
Upvotes: 0
Views: 268
Reputation: 628
This is probably what you are looking for:
cudaError_t cudaSetDevice (int device)
Link to NVIDIA API documentation:
Quote from the above link:
Any device memory subsequently allocated from this host thread using cudaMalloc(), cudaMallocPitch() or cudaMallocArray() will be physically resident on device. Any host memory allocated from this host thread using cudaMallocHost() or cudaHostAlloc() or cudaHostRegister() will have its lifetime associated with device. Any streams or events created from this host thread will be associated with device. Any kernels launched from this host thread using the <<<>>> operator or cudaLaunch() will be executed on device.
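A minimal sketch (not part of the original answer) of how this might look in practice: query the number of devices with cudaGetDeviceCount(), call cudaSetDevice(1) to switch to the second card, and then allocate memory and launch a trivial kernel there. The kernel and variable names are illustrative only.

// select_device.cu -- compile with: nvcc select_device.cu -o select_device
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel used only to show where the launch happens.
__global__ void addOne(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] += 1;
}

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("Found %d CUDA device(s)\n", deviceCount);
    if (deviceCount < 2) {
        printf("Fewer than two devices; exiting.\n");
        return 0;
    }

    // Everything below this call is associated with device 1 (the second card).
    cudaSetDevice(1);

    const int n = 256;
    int *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(int));      // allocated on device 1
    cudaMemset(d_data, 0, n * sizeof(int));

    addOne<<<(n + 127) / 128, 128>>>(d_data, n);  // executes on device 1
    cudaDeviceSynchronize();

    int h_data[n];
    cudaMemcpy(h_data, d_data, n * sizeof(int), cudaMemcpyDeviceToHost);
    printf("h_data[0] = %d\n", h_data[0]);     // expect 1

    cudaFree(d_data);
    return 0;
}

Note that cudaSetDevice() affects only the calling host thread, so in a multi-threaded program each thread has to select its device separately.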
Upvotes: 1