CUDA: unable to use grids up to maxGridSize

While optimizing my code to get the most out of my CUDA card, I ran into the following problem.
Even though every source of information says that with compute capability 2.x the grid can be up to (65535, 65535, 65535) in size, I am unable to launch a grid larger than (65535, 8192, 1).
The example code below shows that even with a block size of (1,1,1) and an empty kernel, any grid larger than that fails with the error "code=4(cudaErrorLaunchFailure)".

OS: Windows 10 Pro
HW: GTS 450
SDK: CUDA 8.0, VS2013 Community Edition (used through nvcc's -ccbin option)
The test code:

#include <cstdio>        // printf
#include <cuda_runtime.h>
#include <helper_cuda.h> // checkCudaErrors, from the CUDA samples

__global__ void KernelTest()
{}

int main()
{
    int cudaDevice = 0;
    int driverVersion = 0, runtimeVersion = 0;
    int deviceCount = 0;
    cudaError_t error_id = cudaGetDeviceCount(&deviceCount);

    if (error_id != cudaSuccess)
    {
        printf ("cudaGetDeviceCount returned %d\n-> %s\n", (int)error_id, cudaGetErrorString(error_id));
        printf ("Result = FAIL\n");
        exit(EXIT_FAILURE);
    }

    // This function call returns 0 if there are no CUDA capable devices.
    if (deviceCount == 0)
    {
        printf("There are no available device(s) that support CUDA\n");
    }
    else
    {
        printf ("Detected %d CUDA Capable device(s)\n", deviceCount);
    }

    cudaSetDevice(cudaDevice);
    cudaDeviceProp deviceProp;
    cudaGetDeviceProperties(&deviceProp, cudaDevice);

    cudaDriverGetVersion(&driverVersion);
    cudaRuntimeGetVersion(&runtimeVersion);
    printf("  CUDA Driver Version / Runtime Version          %d.%d / %d.%d\n", driverVersion/1000, (driverVersion%100)/10, runtimeVersion/1000, (runtimeVersion%100)/10);
    printf("  CUDA Capability Major/Minor version number:    %d.%d\n", deviceProp.major, deviceProp.minor);

    char msg[256];
    // … (rest of the reporting code from deviceQuery elided) …


    const char *sComputeMode[] =
    {
        "Default (multiple host threads can use ::cudaSetDevice() with device simultaneously)",
        "Exclusive (only one host thread in one process is able to use ::cudaSetDevice() with this device)",
        "Prohibited (no host thread can use ::cudaSetDevice() with this device)",
        "Exclusive Process (many threads in one process is able to use ::cudaSetDevice() with this device)",
        "Unknown",
        NULL
    };
    printf("  Compute Mode:\n");
    printf("     < %s >\n", sComputeMode[deviceProp.computeMode]);


    //dim3 gridtest(deviceProp.maxGridSize[0]-1, deviceProp.maxGridSize[1]-1, deviceProp.maxGridSize[2]-1);
    dim3 gridtest(deviceProp.maxGridSize[0], 1, 1);
    dim3 blocktest(1);
    KernelTest<<<gridtest, blocktest>>>();
    cudaDeviceSynchronize();
    checkCudaErrors(cudaPeekAtLastError());

    dim3 gridtest2(deviceProp.maxGridSize[0] / 2, 2, 1);
    KernelTest<<<gridtest2, blocktest>>>();
    cudaDeviceSynchronize();
    checkCudaErrors(cudaPeekAtLastError());

    dim3 gridtest3(deviceProp.maxGridSize[0] / 4, 4, 1);
    KernelTest<<<gridtest3, blocktest>>>();
    cudaDeviceSynchronize();
    checkCudaErrors(cudaPeekAtLastError());

    dim3 gridtest4(deviceProp.maxGridSize[0], 2, 1);
    KernelTest<<<gridtest4, blocktest>>>();
    cudaDeviceSynchronize();
    checkCudaErrors(cudaPeekAtLastError());

    dim3 gridtest5(deviceProp.maxGridSize[0], 4, 1);
    KernelTest<<<gridtest5, blocktest>>>();
    cudaDeviceSynchronize();
    checkCudaErrors(cudaPeekAtLastError());

    dim3 gridtest6(deviceProp.maxGridSize[0], (deviceProp.maxGridSize[1] + 1) / 16, 1); // y = 4096
    KernelTest<<<gridtest6, blocktest>>>();
    cudaDeviceSynchronize();
    checkCudaErrors(cudaPeekAtLastError());

    dim3 gridtest7(deviceProp.maxGridSize[0], (deviceProp.maxGridSize[1] + 1) / 8, 1); // y = 8192
    KernelTest<<<gridtest7, blocktest>>>();
    cudaDeviceSynchronize();
    checkCudaErrors(cudaPeekAtLastError());

    dim3 gridtest8(deviceProp.maxGridSize[0], (deviceProp.maxGridSize[1] + 1) / 4, 1); // y = 16384 - causes the error
    KernelTest<<<gridtest8, blocktest>>>();
    cudaDeviceSynchronize();
    checkCudaErrors(cudaPeekAtLastError());

//    dim3 gridtest9(deviceProp.maxGridSize[0], deviceProp.maxGridSize[1], 1);
//    KernelTest<<<gridtest9, blocktest>>>();
//    cudaDeviceSynchronize();
//    checkCudaErrors(cudaPeekAtLastError());

    cudaDeviceReset();
}

Output of the deviceQuery part:

CUDA Driver Version / Runtime Version          9.1 / 8.0
  CUDA Capability Major/Minor version number:    2.1
  Total amount of global memory:                 1024 MBytes (1073741824 bytes)
  ( 4) Multiprocessors, ( 48) CUDA Cores/MP:     192 CUDA Cores
  GPU Max Clock rate:                            1566 MHz (1.57 GHz)
  Memory Clock rate:                             1804 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (65535, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Answers (1)

talonmies

The key piece of information in your question is this:

CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)

Because you are using a WDDM device, there is a time limit on how much wall clock time a kernel can consume imposed by the display driver. If you exceed this, the driver will kill your kernel.
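As a side note, the watchdog can be detected programmatically: cudaDeviceProp exposes the kernelExecTimeoutEnabled field, which is what deviceQuery reports as "Run time limit on kernels". A minimal check might look like this:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // kernelExecTimeoutEnabled is non-zero when the OS/driver will kill
    // kernels that run too long (deviceQuery: "Run time limit on kernels").
    if (prop.kernelExecTimeoutEnabled)
        printf("Watchdog active: long-running kernels will be killed\n");
    return 0;
}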

That is what is happening here (the cudaErrorLaunchFailure confirms it). Scheduling huge numbers of blocks isn't free, and even a null kernel can take many seconds to complete when enough blocks are scheduled. In your case this is exacerbated by the small, old GPU you are using, which can only run 32 blocks simultaneously, so completing a launch of between several hundred million and a billion blocks requires a very large number of block-scheduling trips to the driver.
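If you genuinely need that many blocks on a WDDM device, one common workaround is to split the work across several smaller launches so that each one finishes well inside the watchdog limit. Here is a minimal sketch of the idea (KernelChunk, LaunchInChunks and yOffset are hypothetical names for illustration, not code from the question):

#include <cuda_runtime.h>

// Hypothetical sketch: split one oversized launch into several smaller ones,
// so that each launch finishes well within the WDDM watchdog limit.
__global__ void KernelChunk(unsigned int yOffset)
{
    unsigned int y = yOffset + blockIdx.y; // logical y index in the full grid
    (void)y;                               // real per-(blockIdx.x, y) work goes here
}

void LaunchInChunks(unsigned int totalY, unsigned int chunkY)
{
    dim3 block(1);
    for (unsigned int y0 = 0; y0 < totalY; y0 += chunkY) {
        unsigned int remaining = totalY - y0;
        unsigned int thisChunk = remaining < chunkY ? remaining : chunkY;
        dim3 grid(65535, thisChunk, 1);
        KernelChunk<<<grid, block>>>(y0);
        cudaDeviceSynchronize();           // each chunk completes on its own
    }
}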

For reference, here is the profiler output on a non-display GPU hosted on a Linux system, which has a much larger number of total resident blocks than your GPU (416 vs 32):

   Start  Duration            Grid Size      Block Size     Regs*    SSMem*    DSMem*           Device   Context    Stream  Name
235.86ms  139.29us          (65535 1 1)         (1 1 1)         8        0B        0B  GeForce GTX 970         1         7  KernelTest(void) [106]
236.03ms  138.49us          (32767 2 1)         (1 1 1)         8        0B        0B  GeForce GTX 970         1         7  KernelTest(void) [109]
236.19ms  138.46us          (16383 4 1)         (1 1 1)         8        0B        0B  GeForce GTX 970         1         7  KernelTest(void) [112]
236.35ms  275.58us          (65535 2 1)         (1 1 1)         8        0B        0B  GeForce GTX 970         1         7  KernelTest(void) [115]
236.65ms  550.09us          (65535 4 1)         (1 1 1)         8        0B        0B  GeForce GTX 970         1         7  KernelTest(void) [118]
237.22ms  504.49ms       (65535 4096 1)         (1 1 1)         8        0B        0B  GeForce GTX 970         1         7  KernelTest(void) [121]
741.79ms  924.72ms       (65535 8192 1)         (1 1 1)         8        0B        0B  GeForce GTX 970         1         7  KernelTest(void) [124]
1.66659s  1.84941s      (65535 16384 1)         (1 1 1)         8        0B        0B  GeForce GTX 970         1         7  KernelTest(void) [127]

You can see that the 65535 x 16384 case takes about 1.8 seconds to run; on your GPU it will take far longer. Hopefully you will also conclude from this that launching huge numbers of blocks is not an optimization, because block scheduling is not zero cost.
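The usual alternative, if the goal is simply to cover a large number of data elements, is a grid-stride loop: launch a modestly sized grid and let each thread process many elements. A sketch of the pattern (ScaleArray is a hypothetical example, not code from this question):

#include <cuda_runtime.h>

// Grid-stride loop: a fixed, modest grid covers n elements because each
// thread strides through the data by the total number of threads launched.
__global__ void ScaleArray(float *data, size_t n, float factor)
{
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        data[i] *= factor;
}

// Usage: a handful of blocks per multiprocessor is enough to fill the GPU;
// there is no need to launch one block (or thread) per element, e.g.:
// ScaleArray<<<32 * numSMs, 256>>>(d_data, n, 2.0f);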
