Reputation: 1
I'm writing a C program using CUDA parallelization, and I was wondering whether it is possible for a kernel to return a break to the CPU.
My program essentially runs a for loop, and inside that loop I perform several parallel operations. At the start of each iteration I have to check a variable (which measures the improvement made by the iteration that just finished) that resides on the GPU.
What I would like is for that check to return a break to the CPU so the for loop exits (I do the check with a trivial <<<1,1>>> kernel).
I've tried copying that variable back to the CPU and doing the check on the CPU but, as I feared, it slows down the overall execution.
Any advice?
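To make the structure concrete, here is a simplified sketch of what I currently do (the real kernels are omitted; work_kernel, control_kernel, improvement_d and the constants are just placeholder names):

```c
#include <cuda_runtime.h>

#define MAX_ITER 1000
#define EPSILON  1e-6f

/* Placeholder for the real parallel work of one iteration; it also
   updates the improvement value that lives on the GPU. */
__global__ void work_kernel(float *data, float *improvement)
{
    /* ... real work omitted ... */
}

/* Trivial <<<1,1>>> control kernel: writes a stop flag on the GPU. */
__global__ void control_kernel(const float *improvement, int *stop)
{
    *stop = (*improvement < EPSILON);
}

int main(void)
{
    float *data_d, *improvement_d;
    int   *stop_d;
    float  start = 1.0f;          /* initial "improvement" value */

    cudaMalloc(&data_d, 1024 * sizeof(float));
    cudaMalloc(&improvement_d, sizeof(float));
    cudaMalloc(&stop_d, sizeof(int));
    cudaMemcpy(improvement_d, &start, sizeof(float), cudaMemcpyHostToDevice);

    for (int it = 0; it < MAX_ITER; ++it) {
        work_kernel<<<8, 128>>>(data_d, improvement_d);
        control_kernel<<<1, 1>>>(improvement_d, stop_d);

        /* What I tried: copy the flag back and decide on the CPU.
           This synchronizes every iteration and slows everything down. */
        int stop_h = 0;
        cudaMemcpy(&stop_h, stop_d, sizeof(int), cudaMemcpyDeviceToHost);
        if (stop_h)
            break;    /* the "break" I would like the GPU itself to trigger */
    }

    cudaFree(data_d);
    cudaFree(improvement_d);
    cudaFree(stop_d);
    return 0;
}
```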
Upvotes: 0
Views: 1057
Reputation: 886
There is no connection between CPU code and GPU code. All you can do while working with CUDA is:

1. copy data from CPU memory to GPU memory;
2. execute a kernel on the GPU;
3. copy the results back from GPU memory to CPU memory.

So, thinking about these steps in a loop, all that is left to you is to copy the result back, check it on the CPU and break the loop if needed.
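As a rough sketch of that pattern (solver_step and the flag are placeholder names, and data_d / done_d are assumed to have been allocated with cudaMalloc beforehand, i.e. step 1 of the list):

```c
#include <cuda_runtime.h>

/* Placeholder kernel: does one iteration of work and is expected to
   set *done to 1 on the GPU when no further improvement was made. */
__global__ void solver_step(float *data, int *done)
{
    /* ... real work omitted ... */
}

/* Run at most max_iter iterations; after each one, copy the flag
   back (step 3) and break the host loop when the GPU says "done". */
int run_until_converged(int max_iter, float *data_d, int *done_d)
{
    for (int it = 0; it < max_iter; ++it) {
        solver_step<<<8, 128>>>(data_d, done_d);           /* step 2 */

        int done_h = 0;
        cudaMemcpy(&done_h, done_d, sizeof(int),
                   cudaMemcpyDeviceToHost);                /* step 3 */

        if (done_h)
            return it;    /* host-side check and break */
    }
    return max_iter;
}
```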
Upvotes: 0
Reputation: 72349
There is presently no way for any running code on a CUDA-capable GPU to preempt running code on the host CPU. So although it isn't at all obvious what you are asking about, I'm fairly certain the answer is no, simply because there is no host-side preempt or interrupt mechanism available in device code.
Upvotes: 2