pranith

Reputation: 879

How to measure overhead of a kernel launch in CUDA

I want to measure the overhead of a kernel launch in CUDA.

I understand that there are various parameters which affect this overhead. I am interested in the following:

I am doing this mainly to measure the advantage of using managed memory, which was introduced in CUDA 6.0. I will update this question with the code I develop and with input from the comments. Thanks!
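For illustration, here is a minimal sketch of the kind of managed-memory timing the question is aiming at; it is not the asker's actual code, and the trivial `AddOne` kernel, the problem size, and the launch configuration are all assumptions. It simply times a kernel operating on a `cudaMallocManaged` buffer with CUDA events (requires CUDA 6.0 and a GPU that supports unified memory):

#include <stdio.h>

// Hypothetical kernel: increments each element of a managed buffer
__global__ void AddOne(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {

    const int N = 1 << 20;
    float ms;
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Managed allocation: no explicit host/device copies are needed
    int *managed;
    cudaMallocManaged(&managed, N * sizeof(int));
    for (int i = 0; i < N; i++) managed[i] = i;

    // Time a single launch, including any managed-memory migration it triggers
    cudaEventRecord(start, 0);
    AddOne<<<(N + 255) / 256, 256>>>(managed, N);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("Managed-memory kernel time: %f ms\n", ms);

    cudaFree(managed);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}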

Upvotes: 5

Views: 3362

Answers (2)

Vitality

Reputation: 21515

How to measure the overhead of a kernel launch in CUDA is dealt with in Section 6.1.1 of "The CUDA Handbook" by N. Wilt. The basic idea is to launch an empty kernel. Here is a sample code snippet:

#include <stdio.h>

__global__ void EmptyKernel() { }

int main() {

    const int N = 100000;

    float time, cumulative_time = 0.f;
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    for (int i=0; i<N; i++) { 

        // Time a single launch of the empty kernel with CUDA events
        cudaEventRecord(start, 0);
        EmptyKernel<<<1,1>>>(); 
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&time, start, stop);
        cumulative_time = cumulative_time + time;

    }

    // Average over N launches to smooth out timing jitter
    printf("Kernel launch overhead time:  %3.5f ms \n", cumulative_time / N);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}

On my laptop's GeForce GT540M card, the kernel launch overhead is 0.00245 ms.

If you want to check how this time depends on the number of threads launched, just change the kernel launch configuration <<<*,*>>>; a sketch of such a sweep follows. The timing does not appear to change significantly with the number of threads launched, which is consistent with the book's statement that most of that time is spent in the driver.
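As a rough sketch of that sweep (the particular block sizes below are an arbitrary choice, not taken from the book or the answer above), one could wrap the same event-based timing in a loop over launch configurations:

#include <stdio.h>

__global__ void EmptyKernel() { }

int main() {

    const int N = 100000;
    const int block_sizes[] = { 1, 32, 128, 256, 512, 1024 };
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Measure the average launch time for each block size
    for (int b = 0; b < 6; b++) {
        float time, cumulative_time = 0.f;
        for (int i = 0; i < N; i++) {
            cudaEventRecord(start, 0);
            EmptyKernel<<<1, block_sizes[b]>>>();
            cudaEventRecord(stop, 0);
            cudaEventSynchronize(stop);
            cudaEventElapsedTime(&time, start, stop);
            cumulative_time += time;
        }
        printf("Block size %4d: %3.5f ms per launch\n", block_sizes[b], cumulative_time / N);
    }

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}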

Upvotes: 6

Alejandro Silvestri

Reputation: 3784

You may also be interested in these test results from the University of Virginia:

Memory transfer overhead: http://www.cs.virginia.edu/~mwb7w/cuda_support/memory_transfer_overhead.html

Kernel launch overhead: http://www.cs.virginia.edu/~mwb7w/cuda_support/kernel_overhead.html

They were measured in a similar way to JackOLantern's proposal above.
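As a rough illustration of the memory transfer side (this is an assumption about the methodology, not the exact benchmark code from the University of Virginia pages), the fixed cost of a host-to-device copy can be exposed by timing a very small cudaMemcpy with CUDA events:

#include <stdio.h>

int main() {

    const int N = 10000;
    const size_t bytes = 4;          // tiny transfer, so the fixed overhead dominates
    char h_buf[4] = { 0 };
    char *d_buf;
    cudaMalloc(&d_buf, bytes);

    float time, cumulative_time = 0.f;
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    for (int i = 0; i < N; i++) {
        cudaEventRecord(start, 0);
        cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&time, start, stop);
        cumulative_time += time;
    }

    printf("Host-to-device transfer overhead: %3.5f ms per copy\n", cumulative_time / N);

    cudaFree(d_buf);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}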

Upvotes: 2
