Reputation: 21
I have an algorithm that I've been trying to accelerate using OpenCL on my NVIDIA GPU.
It has to process a large amount of data (say 100k to millions of elements). For each datum, a matrix (on the device) has to be updated first, using the datum and two vectors; only after the whole matrix has been updated are the two vectors (also on the device) updated with the same datum. So my host code looks something like this:
for (int i = 0; i < millions; i++) {
    /* pass the index of the current datum to both kernels */
    clSetKernelArg(kernel_matrixUpdate, 7, sizeof(int), (void *)&i);
    clSetKernelArg(kernel_vectorsUpdate, 4, sizeof(int), (void *)&i);
    /* update the matrix first, then the vectors */
    clEnqueueNDRangeKernel(command_queue, kernel_matrixUpdate, 1, NULL, &global_item_size_Matrix, NULL, 0, NULL, NULL);
    clEnqueueNDRangeKernel(command_queue, kernel_vectorsUpdate, 1, NULL, &global_item_size_Vectors, NULL, 0, NULL, NULL);
}
Unfortunately, this loop takes longer to execute than the kernels themselves. So my questions are:
Any feedback or opinion will be appreciated. Thank you.
Upvotes: 2
Views: 274
Reputation: 1
When I enqueued many kernels (say 1000), I noticed that the enqueue operations took longer and longer. Calling clFinish(queue)
from time to time gave roughly a 15% increase in overall speed.
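For example, a minimal sketch of that pattern applied to the loop from the question (the flush interval of 1000 is an arbitrary choice, not a recommendation):

for (int i = 0; i < millions; i++) {
    clSetKernelArg(kernel_matrixUpdate, 7, sizeof(int), (void *)&i);
    clSetKernelArg(kernel_vectorsUpdate, 4, sizeof(int), (void *)&i);
    clEnqueueNDRangeKernel(command_queue, kernel_matrixUpdate, 1, NULL, &global_item_size_Matrix, NULL, 0, NULL, NULL);
    clEnqueueNDRangeKernel(command_queue, kernel_vectorsUpdate, 1, NULL, &global_item_size_Vectors, NULL, 0, NULL, NULL);
    /* periodically block until everything enqueued so far has finished,
       so the host-side enqueue backlog does not keep growing */
    if (i % 1000 == 999)
        clFinish(command_queue);
}
clFinish(command_queue);  /* drain whatever is still queued */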
Upvotes: 0
Reputation: 763
You need to upload all your data to the GPU and then launch a kernel with one work item per element, instead of using the for loop.
Generally, when going from CPU to GPU, the outermost "for" loop becomes a kernel invocation.
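A minimal sketch of that idea, assuming the per-element updates are independent of each other (the kernel name process_datum and the host variable process_kernel are illustrative, not from the question):

// Hypothetical kernel: one work item handles one datum.
__kernel void process_datum(__global const float *data,
                            __global float *matrix,
                            __global float *vectors,
                            const int n)
{
    int i = get_global_id(0);
    if (i >= n) return;
    /* ... update matrix/vectors using data[i] ... */
}

/* Host side: one enqueue covering all elements, instead of a host loop. */
size_t global_size = n;  /* one work item per datum */
clSetKernelArg(process_kernel, 3, sizeof(int), (void *)&n);
clEnqueueNDRangeKernel(command_queue, process_kernel, 1, NULL, &global_size, NULL, 0, NULL, NULL);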
Upvotes: 0