kar

Reputation: 2795

How to free __device__ memory in CUDA

__device__ int data; 

__constant__ int var1;

How do I free "data" and "var1" in CUDA?

Thank you

Upvotes: 1

Views: 3651

Answers (3)

user1401491

Reputation: 439

With a device of compute capability sm_20 or above you can simply use the new and delete keywords in device code. Even better would be to use the CUDA Thrust API (an implementation of the Standard Template Library on top of the GPU), really cool stuff.

http://code.google.com/p/thrust/
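A minimal sketch of the Thrust approach (assuming the Thrust headers that ship with the CUDA toolkit are available): a thrust::device_vector allocates device memory on construction and frees it automatically on destruction, so there is nothing to free by hand.

```cuda
#include <thrust/device_vector.h>

int main() {
    // Device memory for 1024 ints is allocated here...
    thrust::device_vector<int> data(1024, 0);

    // ...use `data` with Thrust algorithms or pass
    // thrust::raw_pointer_cast(data.data()) to a kernel...

    return 0;
    // ...and the device memory is released automatically (RAII)
    // when `data` goes out of scope. No explicit cudaFree needed.
}
```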

Upvotes: 3

M. Tibbits

Reputation: 8630

As @CygnusX1 said, you can't free it. As you have declared it, the memory will be allocated for the life of your program. NOTE: this is true even if you never call the kernel.

You can, however, use cudaMalloc and cudaFree (or new/delete within device code in CUDA 4.0) to allocate and free memory temporarily. Of course you must then manipulate everything through pointers, but this is a huge saving if you need to store several large objects, free them, and then store several more large objects...
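A minimal sketch of that pattern, assuming the standard CUDA runtime API: unlike a file-scope __device__ variable, memory obtained from cudaMalloc can be released the moment you are done with it.

```cuda
#include <cuda_runtime.h>

int main() {
    int *d_data = nullptr;

    // Allocate 1024 ints on the device; this can fail, so
    // real code should check the returned cudaError_t.
    cudaMalloc(&d_data, 1024 * sizeof(int));

    // ...launch kernels that use d_data...

    // Release the device memory as soon as it is no longer needed,
    // freeing room for the next batch of large objects.
    cudaFree(d_data);
    return 0;
}
```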

Upvotes: 2

CygnusX1

Reputation: 21779

You can't free it. It gets automatically freed when the program ends.

Similarly, you don't free global variables in host code either.
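To illustrate the analogy (a sketch, not from the original answer): both declarations below have static storage duration, one on the host and one on the device, and neither is ever freed explicitly.

```cuda
int host_global;              // host static storage: never passed to free()
__device__ int device_global; // device static storage: never passed to cudaFree()
// Both exist for the whole lifetime of the program and are
// reclaimed by the OS / CUDA runtime when the program exits.
```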

Upvotes: 2
