Jose Vega

Reputation: 10258

printf inside CUDA __global__ function

I am currently writing a matrix multiplication on a GPU and would like to debug my code, but since I cannot use printf inside a device function, is there something else I can do to see what is going on inside that function? This is my current function:

__global__ void MatrixMulKernel(Matrix Ad, Matrix Bd, Matrix Xd){

    int tx = threadIdx.x;
    int ty = threadIdx.y;

    int bx = blockIdx.x;
    int by = blockIdx.y;

    float sum = 0;

    for( int k = 0; k < Ad.width ; ++k){
        float Melement = Ad.elements[ty * Ad.width + k];
        float Nelement = Bd.elements[k * Bd.width + tx];
        sum += Melement * Nelement;
    }

    Xd.elements[ty * Xd.width + tx] = sum;
}

I would love to know if Ad and Bd are what I think they are, and to see whether that function is actually being called.

Upvotes: 40

Views: 94499

Answers (4)

M. Tibbits

Reputation: 8640

CUDA now supports printf directly in the kernel.

See the Formatted Output section of NVIDIA's online docs.

Formatted output is only supported by devices of compute capability 2.x and higher.

int printf(const char *format[, arg, ...]);

For past versions' docs, see this page.
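
For illustration, a minimal sketch of in-kernel printf (the kernel name here is just a placeholder; it assumes a compute capability 2.x or later device and compiling with an appropriate -arch flag, e.g. -arch=sm_20 or newer):

#include <cstdio>

// helloKernel is a placeholder name for this illustration.
__global__ void helloKernel() {
    // Every thread reports where it is running; the device-side printf
    // buffer is flushed when the host synchronises.
    printf("block (%d,%d) thread (%d,%d)\n",
           blockIdx.x, blockIdx.y, threadIdx.x, threadIdx.y);
}

int main() {
    helloKernel<<<dim3(2, 2), dim3(4, 4)>>>();
    cudaDeviceSynchronize();   // needed so the buffered output actually appears
    return 0;
}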

Upvotes: 84

Tom

Reputation: 21128

EDIT

To avoid misleading people: as M. Tibbits points out, printf is available on any GPU of compute capability 2.0 and higher.

END OF EDIT

You have choices:

  • Use a GPU debugger, e.g. cuda-gdb on Linux or Nexus on Windows
  • Use cuprintf, which is available for registered developers (sign up here)
  • Manually copy the data that you want to see, then dump that buffer on the host after your kernel has completed (remember to synchronise); a sketch of this approach follows below
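
A minimal sketch of the third option, using a hypothetical kernel MyKernel that writes whatever intermediate values you want to inspect into an extra device buffer:

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: records the values you want to inspect into a
// dedicated debug buffer.
__global__ void MyKernel(float *debug_d, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
        debug_d[idx] = 2.0f * idx;   // stand-in for a value you want to see
}

int main() {
    const int n = 16;
    float debug_h[n];
    float *debug_d;
    cudaMalloc((void **)&debug_d, n * sizeof(float));

    MyKernel<<<1, n>>>(debug_d, n);
    cudaDeviceSynchronize();                           // wait for the kernel to finish

    cudaMemcpy(debug_h, debug_d, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("debug[%d] = %f\n", i, debug_h[i]);

    cudaFree(debug_d);
    return 0;
}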

Regarding your code snippet:

  • Consider passing the Matrix structs in via pointer (i.e. cudaMemcpy them to the device, then pass in the device pointer). Right now you will have no problem, but if the function signature gets very large then you may hit the 256-byte limit (see the sketch after this list)
  • You have inefficient reads from Ad: you will have a 32-byte transaction to memory for each read into Melement. Consider using shared memory as a staging area (cf. the transposeNew sample in the SDK)
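
A sketch of the first suggestion, assuming the Matrix layout implied by the question (width, height, and an elements pointer that already refers to device memory); copyDescriptorToDevice is a hypothetical helper, not part of the CUDA API:

#include <cuda_runtime.h>

// Assumed Matrix layout, matching the question: elements is expected to
// point to device memory when the struct is used on the GPU.
struct Matrix {
    int width;
    int height;
    float *elements;
};

// The kernel now receives device pointers to the descriptors instead of
// the structs by value, keeping the argument list small.
__global__ void MatrixMulKernel(const Matrix *Ad, const Matrix *Bd, Matrix *Xd) {
    int tx = threadIdx.x;
    int ty = threadIdx.y;

    float sum = 0;
    for (int k = 0; k < Ad->width; ++k)
        sum += Ad->elements[ty * Ad->width + k] * Bd->elements[k * Bd->width + tx];

    Xd->elements[ty * Xd->width + tx] = sum;
}

// Hypothetical host helper: copy one descriptor to the device and return
// the device pointer that the kernel will receive.
Matrix *copyDescriptorToDevice(const Matrix &m) {
    Matrix *d = 0;
    cudaMalloc((void **)&d, sizeof(Matrix));
    cudaMemcpy(d, &m, sizeof(Matrix), cudaMemcpyHostToDevice);
    return d;
}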

Upvotes: 17

Andrei Pokrovsky

Reputation: 3846

See "Formatted output" (currently B.17) section of CUDA C Programming Guide.

http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html

Upvotes: 2

Juan Leni

Reputation: 7608

by the way..

Upvotes: 4
