curiousexplorer

Reputation: 1237

multi-gpu cuda: Run kernel on one device and modify elements on the other?

Suppose I have multiple GPU's in a machine and I have a kernel running on GPU0.

With the UVA and P2P features of CUDA 4.0, can I modify the contents of an array on another device say GPU1 when the kernel is running on GPU0?

The simpleP2P example in the CUDA 4.0 SDK does not demonstrate this.

It only demonstrates:

Upvotes: 0

Views: 769

Answers (1)

harrism

Reputation: 27809

Short answer: Yes, you can.

Longer answer

The linked presentation gives full details, but here are the requirements:

  • Must be on a 64-bit OS (either Linux or Windows with the Tesla Compute Cluster driver).
  • GPUs must both be Compute Capability 2.0 (sm_20) or higher.
  • Currently the GPUs must be attached to the same IOH (Intel I/O Hub).

You can use cudaDeviceCanAccessPeer() to query whether direct P2P access is possible.
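Below is a minimal sketch (not from the original answer) of the pattern the question describes: the host queries P2P support with cudaDeviceCanAccessPeer(), enables it with cudaDeviceEnablePeerAccess(), and then a kernel launched on GPU 0 writes directly into an array allocated on GPU 1. The kernel name fillPeer and the array size are illustrative only.

    // Sketch: kernel on GPU 0 modifies an array resident on GPU 1 via UVA + P2P.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void fillPeer(int *peerArray, int n, int value)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            peerArray[i] = value;   // this store lands in GPU 1's memory over P2P
    }

    int main()
    {
        const int n = 1 << 20;
        int canAccess = 0;

        // Can GPU 0 address GPU 1's memory directly?
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);
        if (!canAccess) {
            printf("P2P access from GPU 0 to GPU 1 is not available here.\n");
            return 1;
        }

        // Allocate the target array on GPU 1.
        int *d_gpu1 = NULL;
        cudaSetDevice(1);
        cudaMalloc(&d_gpu1, n * sizeof(int));

        // Switch to GPU 0, enable access to GPU 1's memory, launch the kernel there.
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);   // flags argument is reserved and must be 0
        fillPeer<<<(n + 255) / 256, 256>>>(d_gpu1, n, 42);
        cudaDeviceSynchronize();

        // Spot-check one element; with UVA, cudaMemcpyDefault infers the source device.
        int check = 0;
        cudaMemcpy(&check, d_gpu1, sizeof(int), cudaMemcpyDefault);
        printf("d_gpu1[0] = %d\n", check);

        cudaSetDevice(1);
        cudaFree(d_gpu1);
        return 0;
    }

Note that cudaDeviceEnablePeerAccess() is called on the device that will do the accessing (GPU 0 here), and error checking is omitted for brevity.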

Upvotes: 3
