curiousexplorer

Reputation: 1247

Understanding memory usage in CUDA

I have an NVIDIA GTX 570 graphics card running on an Ubuntu 10.10 system with CUDA 4.0.

I know that for performance we need to access memory efficiently and make clever use of registers and shared memory on the device.

However, I don't understand how to calculate the number of registers available per thread, how much shared memory a single block can use, or other such simple but important quantities for a particular kernel configuration.
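I can read the device-wide totals at runtime easily enough (a minimal sketch, error checking omitted), but that only gives me the per-block limits, not what a particular kernel actually uses:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main(void)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);               // device 0
        printf("registers per block:   %d\n",  prop.regsPerBlock);
        printf("shared mem per block:  %zu\n", prop.sharedMemPerBlock);
        printf("max threads per block: %d\n",  prop.maxThreadsPerBlock);
        printf("warp size:             %d\n",  prop.warpSize);
        return 0;
    }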

I want to understand this through an explicit example. Incidentally, I am currently trying to write a particle code, in which one of the kernels should be configured as follows.

Each block is a 1-D collection of threads, and each grid is a 1-D collection of blocks.
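In code, the launch would look roughly like this (a minimal sketch only; the kernel name, the 256-thread block size, and the shared-memory size are placeholders, not my actual values):

    // Sketch only: kernel name, block size and shared-memory size are placeholders.
    __global__ void particle_kernel(const double *pos, double *force, int n)
    {
        extern __shared__ double sdata[];                // dynamically sized shared memory
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // 1-D grid of 1-D blocks

        if (i < n) {
            double p = pos[i];    // per-thread doubles such as this live in registers
            // ... per-particle work using p (and possibly sdata) ...
            force[i] = p;
        }
    }

    void launch_particle_kernel(const double *d_pos, double *d_force, int n)
    {
        int    threadsPerBlock = 256;                                  // placeholder
        int    blocksPerGrid   = (n + threadsPerBlock - 1) / threadsPerBlock;
        size_t shmemBytes      = threadsPerBlock * sizeof(double);     // placeholder
        particle_kernel<<<blocksPerGrid, threadsPerBlock, shmemBytes>>>(d_pos, d_force, n);
    }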

Within a thread I would like to store some numbers of type double. But I am not sure how many such doubles I can store without the registers spilling into local memory (which resides in off-chip device memory). Can someone tell me how many doubles can be stored per thread for this kernel configuration?

Also, is the above-mentioned shared-memory configuration valid for each of my blocks?

A sample computation showing how one would go about deducing these things would be very illustrative and helpful.

Here is the information about my GTX 570 (using deviceQuery from the CUDA SDK):

[deviceQuery] starting...
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Found 1 CUDA Capable device(s)

    Device 0: "GeForce GTX 570"
      CUDA Driver Version / Runtime Version          4.0 / 4.0
      CUDA Capability Major/Minor version number:    2.0
      Total amount of global memory:                 1279 MBytes (1341325312 bytes)
      (15) Multiprocessors x (32) CUDA Cores/MP:     480 CUDA Cores
      GPU Clock Speed:                               1.46 GHz
      Memory Clock rate:                             1900.00 Mhz
      Memory Bus Width:                              320-bit
      L2 Cache Size:                                 655360 bytes
      Max Texture Dimension Size (x,y,z)             1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)
      Max Layered Texture Size (dim) x layers        1D=(16384) x 2048, 2D=(16384,16384) x 2048
      Total amount of constant memory:               65536 bytes
      Total amount of shared memory per block:       49152 bytes
      Total number of registers available per block: 32768
      Warp size:                                     32
      Maximum number of threads per block:           1024
      Maximum sizes of each dimension of a block:    1024 x 1024 x 64
      Maximum sizes of each dimension of a grid:     65535 x 65535 x 65535
      Maximum memory pitch:                          2147483647 bytes
      Texture alignment:                             512 bytes
      Concurrent copy and execution:                 Yes with 1 copy engine(s)
      Run time limit on kernels:                     Yes
      Integrated GPU sharing Host Memory:            No
      Support host page-locked memory mapping:       Yes
      Concurrent kernel execution:                   Yes
      Alignment requirement for Surfaces:            Yes
      Device has ECC support enabled:                No
      Device is using TCC driver mode:               No
      Device supports Unified Addressing (UVA):      Yes
      Device PCI Bus ID / PCI location ID:           2 / 0
      Compute Mode:
         < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

    deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.0, CUDA Runtime Version = 4.0, NumDevs = 1, Device = GeForce GTX 570
    [deviceQuery] test results...
    PASSED

    Press ENTER to exit...

Upvotes: 3

Views: 5107

Answers (1)

FacundoGFlores

Reputation: 8118

Kernel configuration is a little involved, so you should use the CUDA Occupancy Calculator. You also need to study how warps work.

Once a block is assigned to an SM, it is divided into 32-thread units called warps; the warp is the unit of thread scheduling in an SM. For a given block size and a given number of blocks assigned to each SM, you can calculate the number of warps resident on that SM. In your case a warp consists of 32 threads, so a block of 256 threads contains 8 warps.

Choosing a good kernel configuration then depends on your data and operations. Remember that you want to occupy each SM as fully as possible: fill its thread capacity and give it the maximum number of resident warps, so the scheduler can hide long-latency operations. Also take care not to exceed the hardware limits, such as the maximum number of threads per block, which in your case is 1024.
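As a rough sketch of the kind of arithmetic involved (the 256-thread block is only an example; the number of registers your kernel actually uses per thread is decided by the compiler, and you can see it by compiling with nvcc --ptxas-options=-v):

    /* Back-of-the-envelope arithmetic for a compute capability 2.0 device
     * such as the GTX 570. The 256-thread block size is only an example. */
    #include <stdio.h>

    int main(void)
    {
        const int regsPerBlock    = 32768;   /* from deviceQuery                  */
        const int sharedPerBlock  = 49152;   /* bytes per block, from deviceQuery */
        const int threadsPerBlock = 256;     /* example block size                */
        const int warpSize        = 32;

        int warpsPerBlock = threadsPerBlock / warpSize;       /* 8 warps          */
        int regsPerThread = regsPerBlock / threadsPerBlock;   /* 128 registers    */

        /* On compute capability 2.x the compiler caps a thread at 63 registers,
         * and each double occupies two 32-bit registers.                         */
        int doublesPerThreadCap = 63 / 2;                                /* ~31   */
        int doublesInShared     = sharedPerBlock / (int)sizeof(double);  /* 6144  */

        printf("warps per block:              %d\n", warpsPerBlock);
        printf("registers per thread (avg):   %d\n", regsPerThread);
        printf("doubles per thread (reg cap): %d\n", doublesPerThreadCap);
        printf("doubles in shared per block:  %d\n", doublesInShared);
        return 0;
    }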

Upvotes: 1
