kl.

Reputation: 361

Creating arrays in an NVIDIA CUDA kernel

Hi, I just wanted to know whether it is possible to do the following inside an NVIDIA CUDA kernel:

__global__ void compute(long *c1, long size, ...)
{
  ...
  long d[1000];
  ...
}

or the following:

__global__ void compute(long *c1, long size, ...)
{
  ...
  long d[size];
  ...
}

Upvotes: 14

Views: 19121

Answers (4)

Juan Leni

Reputation: 7578

You can allocate shared memory dynamically when you launch the kernel.

__global__ void compute(long *c1, long size, ...)
{
  ...
  extern __shared__ float shared[];
  ...
}

compute <<< dimGrid, dimBlock, sharedMemSize >>>( blah blah );

From the CUDA Programming Guide:

the size of the array is determined at launch time (see Section 4.2.3).
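
To make the mechanics concrete, here is a minimal self-contained sketch (the kernel name and parameters are taken from the question; the staging loop and the scratch name are illustrative only):

__global__ void compute(long *c1, long size)
{
  // The length of this array is set by the third launch parameter below.
  extern __shared__ long scratch[];

  int tid = threadIdx.x;
  if (tid < size)
    scratch[tid] = c1[tid];   // stage data in fast on-chip shared memory
  __syncthreads();
  // ... work on scratch[] ...
}

// Host side: reserve size * sizeof(long) bytes of shared memory per block.
compute<<<dimGrid, dimBlock, size * sizeof(long)>>>(c1, size);

Keep in mind the array is shared by all threads of a block, not private to each thread.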

Upvotes: 10

adnan ozsoy

Reputation: 69

Dynamic memory allocation at kernel runtime is supported; check the SDK example "new delete".
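
For reference, a minimal sketch of what in-kernel new/delete looks like (this requires a device of compute capability 2.0 or higher; the signature mirrors the question):

__global__ void compute(long *c1, long size)
{
  // Each thread allocates its own buffer from the device heap.
  long *d = new long[size];
  if (d != NULL)
  {
    // ... work with d[0..size-1] ...
    delete [] d;
  }
}

The device heap is fairly small by default; if allocations fail, it can be enlarged from the host with cudaDeviceSetLimit(cudaLimitMallocHeapSize, bytes) before launching the kernel.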

Upvotes: 6

Sebastian

Reputation: 8154

You can do #1, but beware: this array will be created in EVERY thread!

Your second snippet won't work, because dynamic memory allocation at kernel runtime is not supported.
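
A common workaround is to size the buffer on the host instead and give every thread its own slice of one big allocation; a rough sketch (numThreads, standing for the total number of launched threads, is not from the question):

__global__ void compute(long *c1, long size, long *d)
{
  int tid = blockIdx.x * blockDim.x + threadIdx.x;
  long *mine = d + (size_t)tid * size;   // this thread's private slice
  // ... work with mine[0..size-1] ...
}

// Host side: one cudaMalloc covers the scratch space of every thread.
long *d;
cudaMalloc((void**)&d, (size_t)numThreads * size * sizeof(long));
compute<<<dimGrid, dimBlock>>>(c1, size, d);
cudaFree(d);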

Upvotes: 11

tkerwin

Reputation: 9759

You can do the first example; I haven't tried the second.

However, if you can help it, you might want to redesign your program not to do this. You do not want to allocate 4000 bytes of memory in your kernel; that will lead to heavy use of CUDA local memory, since you will not be able to fit everything into registers. CUDA local memory is slow (around 400 cycles of memory latency).
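
One way to check whether an array has spilled to local memory is to ask the compiler; as a sketch, compile with the verbose PTX assembler flag and look at the reported local memory (lmem) figure (the file name is made up):

// Compile with: nvcc -Xptxas -v kernel.cu
__global__ void compute(long *c1, long size)
{
  long d[1000];                    // too big for registers: placed in local memory
  for (int i = 0; i < 1000; ++i)   // touch the array so it is not optimized away
    d[i] = c1[i % size];
  c1[threadIdx.x] = d[threadIdx.x % 1000];
}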

Upvotes: 12
