Reputation: 171
Assume a system with two distinct GPUs from the same vendor, so that both can be accessed through a single OpenCL platform. Given the following simplified OpenCL code:
float* someRawData;
cl_device_id gpus[2];
clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 2, gpus, NULL);
cl_device_id gpu1 = gpus[0], gpu2 = gpus[1];
cl_context ctx = clCreateContext(NULL, 2, gpus, ...);
cl_command_queue queue1 = clCreateCommandQueue(ctx, gpu1, ...);
cl_command_queue queue2 = clCreateCommandQueue(ctx, gpu2, ...);
cl_mem gpuMem = clCreateBuffer(ctx, CL_MEM_READ_WRITE, ...);
clEnqueueWriteBuffer(queue1, gpuMem, ..., someRawData, ...);
clFinish(queue1);
At the end of the execution, will someRawData be in both GPUs' memory, or only in gpu1's memory?
Upvotes: 1
Views: 942
Reputation: 2181
It is up to the implementation where the data will reside after calling clFinish(), but most likely it will be in the memory of the GPU referenced by the queue (gpu1 here). This abstraction also makes it possible to access gpuMem from a kernel launched on queue2 without an explicit data transfer: the runtime migrates the buffer between the devices in the context as needed. If you want explicit control over where the buffer lives, OpenCL 1.2 provides clEnqueueMigrateMemObjects for that purpose.
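
To illustrate the second point, here is a minimal sketch of launching a kernel on queue2 that reads and writes gpuMem directly. It reuses ctx, gpu2, queue2, and gpuMem from the question; the "scale" kernel and the element count N are illustrative assumptions, not part of the original code. Error checking is omitted for brevity.

```c
/* Hypothetical kernel that doubles every element of the buffer. */
const char* src =
    "__kernel void scale(__global float* data) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] *= 2.0f;\n"
    "}\n";

cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
clBuildProgram(prog, 1, &gpu2, NULL, NULL, NULL);

cl_kernel k = clCreateKernel(prog, "scale", NULL);
clSetKernelArg(k, 0, sizeof(cl_mem), &gpuMem);

size_t global = N; /* assumed: number of floats in gpuMem */

/* No clEnqueueCopyBuffer or explicit migration between gpu1 and gpu2:
 * the runtime makes gpuMem available to gpu2 before the kernel runs. */
clEnqueueNDRangeKernel(queue2, k, 1, NULL, &global, NULL, 0, NULL, NULL);
clFinish(queue2);
```

Whether the implementation copies the buffer eagerly or lazily on first use by gpu2 is, again, implementation-defined; the guarantee is only that the kernel sees the buffer's current contents.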
Upvotes: 1