Reputation: 13652
If (in C++ + CUDA) cudaMallocManaged() is used to allocate a shared array in host and GPU memory, and the program encounters (say, in host code) an exit(1), does this leave leaked memory on the GPU permanently?
I am going to guess the answer is NO, based on Will exit() or an exception prevent an end-of-scope destructor from being called?, but I am not sure whether the GPU has some kind of reclamation mechanism.
Upvotes: 0
Views: 197
Reputation: 72349
If (in C++ + CUDA) cudaMallocManaged() is used to allocate a shared array in host and GPU memory, and the program encounters (say, in host code) an exit(1), does this leave dangling memory in the GPU permanently?
No. The CUDA runtime API registers a teardown function which will release all resources the API claimed at process exit. This operation includes destruction of any active GPU contexts, which frees memory on the GPU. Note that the process actually has to exit for all this to happen (see here for an example of how this can go wrong).
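To illustrate, here is a minimal sketch (assuming a CUDA-capable system and the runtime API; the allocation size and values are arbitrary) of a program that allocates managed memory and then exits without ever calling cudaFree():

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    float *data = nullptr;

    // Allocate 1 MiB of managed (unified) memory, accessible from
    // both host and device code.
    cudaError_t err = cudaMallocManaged(&data, 1 << 20);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMallocManaged failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }

    data[0] = 42.0f;  // touch the allocation from the host

    // Simulated fatal error path: exit(1) without cudaFree(data).
    // Because the runtime's teardown handler runs at process exit and
    // destroys the GPU context, the allocation is reclaimed anyway --
    // nothing is leaked on the device once the process is gone.
    exit(1);
}
```

Relying on this teardown is fine for abnormal exits, but explicitly calling cudaFree() on normal paths remains good practice, if only to keep tools like compute-sanitizer from reporting the allocation as leaked.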
Upvotes: 3