linda

Reputation: 73

Dynamically allocating memory on a GPU

Is it possible to dynamically allocate memory in a GPU's global memory from inside a kernel?
I don't know in advance how large my answer will be, so I need a way to allocate memory for each part of the answer as it is produced. CUDA 4.0 allows us to use host RAM... is that a good idea, or will it reduce the speed?

Upvotes: 5

Views: 5185

Answers (2)

kokosing

Reputation: 5601

From CUDA 4.0 onward you can use the C++ `new` and `delete` operators in device code instead of C's `malloc` and `free`.
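A minimal sketch of what this looks like, assuming a CUDA 4.0+ toolkit and a device of compute capability 2.0 or higher (the kernel name, array size, and heap size here are illustrative, not from the original answer):

```cuda
#include <cuda_runtime.h>

__global__ void newDeleteTest()
{
    // Each thread allocates its own array from the device heap
    int* data = new int[16];
    if (data != NULL)   // device-side new returns NULL (no exceptions) if the heap is exhausted
    {
        data[0] = threadIdx.x;
        delete[] data;
    }
}

int main()
{
    // The device heap must be sized before the first kernel launch that allocates from it
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, 8 * 1024 * 1024);
    newDeleteTest<<<1, 32>>>();
    cudaDeviceSynchronize();
    return 0;
}
```

Note that memory allocated with device-side `new` lives in the device heap, so it cannot be freed with host-side `cudaFree`, and vice versa.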

Upvotes: 1

scatman

Reputation: 14555

It is possible to use malloc inside a kernel (on devices of compute capability 2.0 and higher). Check the following example, taken from the NVIDIA CUDA Programming Guide:

__global__ void mallocTest()
{
  // Each thread allocates 123 bytes from the device heap
  char* ptr = (char*)malloc(123);
  printf("Thread %d got pointer: %p\n", threadIdx.x, ptr);
  free(ptr);
}

int main()
{
  // Set the device heap size before launching any kernel that calls malloc
  cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
  mallocTest<<<1, 5>>>();
  cudaThreadSynchronize();
  return 0;
}

This will output something like (the actual pointer values will vary):
Thread 0 got pointer: 00057020 
Thread 1 got pointer: 0005708c 
Thread 2 got pointer: 000570f8 
Thread 3 got pointer: 00057164 

Upvotes: 11

Related Questions