David Boyle

Reputation: 67

Why is this code ten times slower on the GPU than CPU?

I have a problem that boils down to performing some arithmetic on each element of a set of matrices. I thought this sounded like the kind of computation that could benefit greatly from being shifted onto the GPU. However, I've only succeeded in slowing down the computation by a factor of 10!

Here are the specifics of my test system:

The code below performs calculations equivalent to those in my production code, on the CPU and on the GPU. The latter is consistently ten times slower on my machine (CPU approx. 650 ms; GPU approx. 7 s).

I've tried changing the grid and block sizes; I've increased and decreased the size of the array passed to the GPU; I've run it through the Visual Profiler; and I've tried integer data rather than doubles, but whatever I do, the GPU version is always significantly slower than the CPU equivalent.

So why is the GPU version so much slower, and what changes, beyond those I've mentioned above, could I try to improve its performance?

Here's my command line: nvcc source.cu -o CPUSpeedTest.exe -arch=sm_30

And here's the contents of source.cu:

#include <iostream>
#include <windows.h>
#include <cuda_runtime_api.h>

void AdjustArrayOnCPU(double factor1, double factor2, double factor3, double denominator, double* array, int arrayLength, double* curve, int curveLength)
{
    for (size_t i = 0; i < arrayLength; i++)
    {
        double adjustmentFactor = factor1 * factor2 * factor3 * (curve[i] / denominator);
        array[i] = array[i] * adjustmentFactor;
    }
}

__global__ void CudaKernel(double factor1, double factor2, double factor3, double denominator, double* array, int arrayLength, double* curve, int curveLength)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;

    if (idx < arrayLength)
    {
        double adjustmentFactor = factor1 * factor2 * factor3 * (curve[idx] / denominator);
        array[idx] = array[idx] * adjustmentFactor;
    }
}

void AdjustArrayOnGPU(double array[], int arrayLength, double factor1, double factor2, double factor3, double denominator, double curve[], int curveLength)
{
    double *dev_row, *dev_curve;

    cudaMalloc((void**)&dev_row, sizeof(double) * arrayLength);
    cudaMalloc((void**)&dev_curve, sizeof(double) * curveLength);

    cudaMemcpy(dev_row, array, sizeof(double) * arrayLength, cudaMemcpyHostToDevice);
    cudaMemcpy(dev_curve, curve, sizeof(double) * curveLength, cudaMemcpyHostToDevice);

    CudaKernel<<<100, 1000>>>(factor1, factor2, factor3, denominator, dev_row, arrayLength, dev_curve, curveLength);

    cudaMemcpy(array, dev_row, sizeof(double) * arrayLength, cudaMemcpyDeviceToHost);

    cudaFree(dev_curve);
    cudaFree(dev_row);
}

void FillArray(int length, double row[])
{
    for (size_t i = 0; i < length; i++) row[i] = 0.1 + i;
}

int main(void)
{
    const int arrayLength = 10000;

    double arrayForCPU[arrayLength], curve1[arrayLength], arrayForGPU[arrayLength], curve2[arrayLength];

    FillArray(arrayLength, curve1);
    FillArray(arrayLength, curve2);

    ///////////////////////////////////// CPU Version ////////////////////////////////////////

    LARGE_INTEGER StartingTime, EndingTime, ElapsedMilliseconds, Frequency;

    QueryPerformanceFrequency(&Frequency);
    QueryPerformanceCounter(&StartingTime);

    for (size_t iterations = 0; iterations < 10000; iterations++)
    {
        FillArray(arrayLength, arrayForCPU);
        AdjustArrayOnCPU(1.0, 1.0, 1.0, 1.0, arrayForCPU, 10000, curve1, 10000);
    }

    QueryPerformanceCounter(&EndingTime);

    ElapsedMilliseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
    ElapsedMilliseconds.QuadPart *= 1000;
    ElapsedMilliseconds.QuadPart /= Frequency.QuadPart;
    std::cout << "Elapsed Milliseconds: " << ElapsedMilliseconds.QuadPart << std::endl;

    ///////////////////////////////////// GPU Version ////////////////////////////////////////

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);

    for (size_t iterations = 0; iterations < 10000; iterations++)
    {
        FillArray(arrayLength, arrayForGPU);
        AdjustArrayOnGPU(arrayForGPU, 10000, 1.0, 1.0, 1.0, 1.0, curve2, 10000);
    }

    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float elapsedTime;
    cudaEventElapsedTime(&elapsedTime, start, stop);

    std::cout << "CUDA Elapsed Milliseconds: " << elapsedTime << std::endl;

    cudaEventDestroy(start);
    cudaEventDestroy(stop);

    return 0;
}

And here is an example of the output of CPUSpeedTest.exe:

Elapsed Milliseconds: 565
CUDA Elapsed Milliseconds: 7156.76

Upvotes: 1

Views: 983

Answers (1)

David Boyle

Reputation: 67

What follows is likely to be embarrassingly obvious to most developers working with CUDA, but may be of value to others - like myself - who are new to the technology.

The GPU code is ten times slower than the CPU equivalent because it exhibits a perfect storm of performance-wrecking characteristics.

The GPU code spends most of its time allocating memory on the device, copying data to the device, performing a very simple calculation (one that is extremely fast on any kind of processor), and then copying the data back from the device to the host.
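
To see where the time goes, here is a minimal sketch of bracketing each phase of the GPU path with cudaEvent timers. This is not part of the original program (the function name ProfileAdjustArrayOnGPU is made up for illustration), and the allocation timing is only approximate because cudaMalloc is a host-side call, but it is enough to show that the allocations and copies dwarf the kernel:

void ProfileAdjustArrayOnGPU(double array[], int arrayLength, double curve[], int curveLength)
{
    // Events mark the boundaries between the four phases:
    // allocate, copy in, compute, copy out.
    cudaEvent_t t0, t1, t2, t3, t4;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);
    cudaEventCreate(&t3); cudaEventCreate(&t4);

    double *dev_row, *dev_curve;

    cudaEventRecord(t0);
    cudaMalloc((void**)&dev_row, sizeof(double) * arrayLength);
    cudaMalloc((void**)&dev_curve, sizeof(double) * curveLength);

    cudaEventRecord(t1);
    cudaMemcpy(dev_row, array, sizeof(double) * arrayLength, cudaMemcpyHostToDevice);
    cudaMemcpy(dev_curve, curve, sizeof(double) * curveLength, cudaMemcpyHostToDevice);

    cudaEventRecord(t2);
    CudaKernel<<<100, 1000>>>(1.0, 1.0, 1.0, 1.0, dev_row, arrayLength, dev_curve, curveLength);

    cudaEventRecord(t3);
    cudaMemcpy(array, dev_row, sizeof(double) * arrayLength, cudaMemcpyDeviceToHost);

    cudaEventRecord(t4);
    cudaEventSynchronize(t4);

    float allocMs, copyInMs, kernelMs, copyOutMs;
    cudaEventElapsedTime(&allocMs, t0, t1);
    cudaEventElapsedTime(&copyInMs, t1, t2);
    cudaEventElapsedTime(&kernelMs, t2, t3);
    cudaEventElapsedTime(&copyOutMs, t3, t4);

    std::cout << "alloc: " << allocMs << " ms, copy in: " << copyInMs
              << " ms, kernel: " << kernelMs << " ms, copy out: " << copyOutMs
              << " ms" << std::endl;

    cudaFree(dev_curve);
    cudaFree(dev_row);
}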

As noted in the comments, if an upper bound exists on the size of the data structures being processed, then a buffer on the GPU can be allocated exactly once and reused. In the code above, this takes the GPU-to-CPU runtime ratio down from 10:1 to 4:1.
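
Roughly what that change looks like, as a sketch only: the helper names (AllocateGPUBuffers, AdjustArrayOnGPUReusingBuffers, FreeGPUBuffers) are invented for illustration, and the device pointers are made file-scope so the allocation happens once, outside the timed loop:

double *dev_row = 0, *dev_curve = 0;

// Call once before the timed loop; the buffers are sized for the largest
// arrays that will ever be processed.
void AllocateGPUBuffers(int maxArrayLength, int maxCurveLength)
{
    cudaMalloc((void**)&dev_row, sizeof(double) * maxArrayLength);
    cudaMalloc((void**)&dev_curve, sizeof(double) * maxCurveLength);
}

// Per-iteration work: only the copies and the kernel launch remain.
void AdjustArrayOnGPUReusingBuffers(double array[], int arrayLength, double factor1, double factor2, double factor3, double denominator, double curve[], int curveLength)
{
    cudaMemcpy(dev_row, array, sizeof(double) * arrayLength, cudaMemcpyHostToDevice);
    cudaMemcpy(dev_curve, curve, sizeof(double) * curveLength, cudaMemcpyHostToDevice);

    CudaKernel<<<100, 1000>>>(factor1, factor2, factor3, denominator, dev_row, arrayLength, dev_curve, curveLength);

    cudaMemcpy(array, dev_row, sizeof(double) * arrayLength, cudaMemcpyDeviceToHost);
}

// Call once after the timed loop.
void FreeGPUBuffers()
{
    cudaFree(dev_curve);
    cudaFree(dev_row);
}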

The remaining disparity comes down to the fact that the calculation is simple enough for the CPU to perform it, in serial, millions of times in a very short span. In the code above, the per-element work is reading a value from an array, a few multiplications, and an assignment back to an array element. Something this simple must be performed millions of times before the benefit of doing it in parallel outweighs the unavoidable cost of transferring the data to the GPU and back. On my test system, a million array elements is the break-even point, where the GPU and CPU versions take approximately the same amount of time.
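
One caveat for anyone repeating that break-even measurement with the code above: the fixed <<<100, 1000>>> launch only covers 100,000 threads, so once arrayLength grows towards a million the grid needs to be sized from the array length (and the host arrays need to move from the stack to the heap at that size). A sketch of the launch change, inside AdjustArrayOnGPU:

// Replace the fixed <<<100, 1000>>> launch with a grid sized from the array
// length, so every element gets a thread; the kernel's bounds check handles
// the final, partially filled block.
const int threadsPerBlock = 256;
const int blocks = (arrayLength + threadsPerBlock - 1) / threadsPerBlock;
CudaKernel<<<blocks, threadsPerBlock>>>(factor1, factor2, factor3, denominator, dev_row, arrayLength, dev_curve, curveLength);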

Upvotes: 4
