Manish

Reputation: 1769

Using CUDA Occupancy Calculator


I am using the occupancy calculator, but I cannot understand how to get the registers per thread / shared memory per block. I read the documentation. I use Visual Studio, so in the project properties under CUDA Build Rule -> Command Line -> Additional Options I added --ptxas-options=-v. The program compiles fine, but I do not see any output. Can anybody help? Thanks

Upvotes: 5

Views: 4505

Answers (4)

tpm1510

Reputation: 361

shoosh's answer is probably the easiest way to find the register and shared memory usage. Make sure you're looking at the output pane first (select "Output" in the "View" pull-down menu) and then re-compile. The compiler should give you ptxas info for all kernels in the output pane, as you can see in the picture below.

View of ptxas compiler output in VS output window
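If nothing shows up in Visual Studio even then, a quick sanity check is to compile the same .cu file from a command prompt with the verbose flag passed to nvcc directly. A minimal sketch (the file and kernel names here are just placeholders):

// check.cu -- a trivial kernel whose only purpose is to make ptxas print resource usage
__global__ void checkKernel(float *out)
{
    __shared__ float buf[32];              // 32 * 4 bytes = 128 bytes of shared memory
    buf[threadIdx.x % 32] = threadIdx.x;
    __syncthreads();
    out[threadIdx.x] = buf[threadIdx.x % 32];
}

Compile it with:

nvcc --ptxas-options=-v -c check.cu

If the ptxas info line appears on the command line but not in Visual Studio, the extra option is most likely not being passed through by the CUDA build rule.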

Upvotes: 3

jwdmsd

Reputation: 2127

Try this simple rule:

All local variables such as int a, float b, etc. in your kernel are stored in registers, but only as long as they fit within the registers available on a multiprocessor (see Limits). If you declare a large array such as int a[1000], then a will not be stored in registers; it will be placed in local memory (DRAM) instead.

The shared memory used by your kernel code is the Shared Memory / Block value. For example, if you declare __shared__ float shMem[256], you are using 256 * 4 (size of float) = 1024 bytes of shared memory.
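A minimal, self-contained sketch (the kernel and variable names are made up) that shows both cases; compiling it with --ptxas-options=-v should report 1024 bytes of shared memory per block and, depending on the compiler, local-memory usage for the large array:

__global__ void resourceDemo(float *out)
{
    int   a = threadIdx.x;           // scalar locals like these normally end up in registers
    float b = 2.0f * a;

    int big[1000];                   // far too large for registers: placed in local memory (DRAM)
    big[threadIdx.x % 1000] = a;

    __shared__ float shMem[256];     // 256 * 4 (size of float) = 1024 bytes of shared memory per block
    shMem[threadIdx.x % 256] = b + big[threadIdx.x % 1000];
    __syncthreads();

    out[threadIdx.x] = shMem[threadIdx.x % 256];
}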

The following sample code (it will not work properly; it is just an example) uses 9 32-bit registers per thread: int xIndex, yIndex, Idx, shY, shX, aLocX, aLocY and float t, temp. It uses 324 bytes of shared memory per block, since BLOCK_DIM = 16 and the shared array is (BLOCK_DIM+2) * (BLOCK_DIM+2) = 18 * 18 = 324 unsigned chars.

__global__ void averageFilter (unsigned char * outImage,
                               int imageWidth,
                               int imageHeight,
                               cuviPoint2 loc)
{
    // Scalar locals: candidates for registers.
    unsigned int xIndex = blockIdx.x * BLOCK_DIM + threadIdx.x;
    unsigned int yIndex = blockIdx.y * BLOCK_DIM + threadIdx.y;
    unsigned int Idx = yIndex * imageWidth + xIndex;
    float t = INC;

    if (xIndex >= imageWidth || yIndex >= imageHeight)
        return;
    else if (xIndex == 0 || xIndex == imageWidth - 1 ||
             yIndex == 0 || yIndex == imageHeight - 1) {
        // Image-border pixels: average directly from the texture.
        for (int i = -1; i <= 1; i++)
            for (int j = -1; j <= 1; j++)
                t += tex1Dfetch(texMem, Idx + i * imageWidth + j);
        outImage[Idx] = t / 6;
    }

    // (BLOCK_DIM+2) * (BLOCK_DIM+2) = 18 * 18 = 324 bytes of shared memory per block.
    __shared__ unsigned char shMem[BLOCK_DIM + 2][BLOCK_DIM + 2];

    unsigned int shY = threadIdx.y + 1;
    unsigned int shX = threadIdx.x + 1;

    if (threadIdx.x == 0 || threadIdx.x == BLOCK_DIM - 1 ||
        threadIdx.y == 0 || threadIdx.y == BLOCK_DIM - 1) {
        // Block-border threads also load the halo around the tile.
        for (int i = -1; i <= 1; i++)
            for (int j = -1; j <= 1; j++)
                shMem[shY + i][shX + j] = tex1Dfetch(texMem, Idx + i * imageWidth + j);
    }
    else
        shMem[shY][shX] = tex1Dfetch(texMem, Idx);

    __syncthreads();

    if (xIndex == 0 || xIndex == imageWidth - 1 ||
        yIndex == 0 || yIndex == imageHeight - 1)
        return;

    int aLocX = loc.x, aLocY = loc.y;

    float temp = INC;
    for (int i = aLocY; i <= aLocY + 2; i++)
        for (int j = aLocX; j <= aLocX + 2; j++)
            temp += shMem[shY + i][shX + j];

    outImage[Idx] = floor(temp / 9);
}

Upvotes: 3

jmilloy

Reputation: 8365

Another way to find this information is to use the Visual Profiler or NVIDIA's Parallel Nsight.

Upvotes: 0

shoosh

Reputation: 78934

With this switch on, there should be a line in the compiler output window that tells you the number of registers and the amount of shared memory used.
Do you see anything at all in the compiler output window? Can you copy and paste it into the question?
It should look something like:

ptxas info : Used 3 registers, 2084+1060 bytes smem, 40 bytes cmem[0], 12 bytes cmem[1]

Upvotes: 4
