Dzung Viet

Reputation: 11

How do I choose the correct threads per block on a GPU?

I am new to GPU programming with CUDA. I tried a simple example (code attached). All six arrays should hold the same value at each index, but when I read them back, some elements come out wrong (for example, 0 64832 64832 64832 0 64832 instead of 0 0 0 0 0 0). However, when I change the value of THREADS_PER_BLOCK from 1024 to 2, it works. Can you help me explain this phenomenon?

Thank you so much!

#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <stdio.h>
#include <time.h>
using namespace std;
#define THREADGPU (360*180) 
#define THREADS_PER_BLOCK 1024

__global__ void calculation(double *xStartDraw, double *yStartDraw, double 
*zStartDraw,
double *xEndDraw, double *yEndDraw, double *zEndDraw)
{
int index = blockIdx.x * blockDim.x + threadIdx.x;

//cout with same value
xStartDraw[index] = index;
yStartDraw[index] = index;
zStartDraw[index] = index;
xEndDraw[index] = index;
yEndDraw[index] = index;
zEndDraw[index] = index;
}

int main(int argc, char **argv)
{
    // host buffers
    double *xStartDraw, *yStartDraw, *zStartDraw;
    double *xEndDraw, *yEndDraw, *zEndDraw;
    int size = THREADGPU * sizeof(double);
    xStartDraw = (double *)malloc(size);
    yStartDraw = (double *)malloc(size);
    zStartDraw = (double *)malloc(size);
    xEndDraw = (double *)malloc(size);
    yEndDraw = (double *)malloc(size);
    zEndDraw = (double *)malloc(size);

    // device buffers
    double *d_xStartDraw, *d_yStartDraw, *d_zStartDraw;
    double *d_xEndDraw, *d_yEndDraw, *d_zEndDraw;
    cudaMalloc((void **)&d_xStartDraw, size);
    cudaMalloc((void **)&d_yStartDraw, size);
    cudaMalloc((void **)&d_zStartDraw, size);
    cudaMalloc((void **)&d_xEndDraw, size);
    cudaMalloc((void **)&d_yEndDraw, size);
    cudaMalloc((void **)&d_zEndDraw, size);

    // launch enough blocks to cover all THREADGPU elements (rounding up)
    calculation<<<(THREADGPU + THREADS_PER_BLOCK - 1) / THREADS_PER_BLOCK,
                  THREADS_PER_BLOCK>>>(d_xStartDraw, d_yStartDraw, d_zStartDraw,
                                       d_xEndDraw, d_yEndDraw, d_zEndDraw);

    // copy results back to the host
    cudaMemcpy(xStartDraw, d_xStartDraw, size, cudaMemcpyDeviceToHost);
    cudaMemcpy(yStartDraw, d_yStartDraw, size, cudaMemcpyDeviceToHost);
    cudaMemcpy(zStartDraw, d_zStartDraw, size, cudaMemcpyDeviceToHost);
    cudaMemcpy(xEndDraw, d_xEndDraw, size, cudaMemcpyDeviceToHost);
    cudaMemcpy(yEndDraw, d_yEndDraw, size, cudaMemcpyDeviceToHost);
    cudaMemcpy(zEndDraw, d_zEndDraw, size, cudaMemcpyDeviceToHost);

    for (int iRun = 0; iRun < THREADGPU; iRun++)
    {
        cout << iRun << " " << xStartDraw[iRun] << " " << yStartDraw[iRun]
             << " " << zStartDraw[iRun] << " " << xEndDraw[iRun]
             << " " << yEndDraw[iRun] << " " << zEndDraw[iRun] << endl;
    }

    free(xStartDraw);
    free(yStartDraw);
    free(zStartDraw);
    free(xEndDraw);
    free(yEndDraw);
    free(zEndDraw);
    cudaFree(d_xStartDraw);
    cudaFree(d_yStartDraw);
    cudaFree(d_zStartDraw);
    cudaFree(d_xEndDraw);
    cudaFree(d_yEndDraw);
    cudaFree(d_zEndDraw);

    return 0;
}

Upvotes: 1

Views: 59

Answers (1)

Michael Kenzel

Reputation: 15941

360*180 is not evenly divisible by 1024, so your launch rounds up to 64 blocks, i.e., 64 × 1024 = 65536 threads for only 360 × 180 = 64800 elements. The last 736 threads in your grid do not have a valid element to work on, and they end up accessing memory out of bounds. 360*180 is, however, evenly divisible by 2, which is why it works then…
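
Here is a minimal sketch of the usual fix: pass the element count to the kernel and guard the index, so the surplus threads in the last block simply do nothing. The extra parameter n is my own addition, not part of the original code:

// Same kernel, with a bounds check; `n` is a hypothetical extra
// parameter carrying the element count (THREADGPU).
__global__ void calculation(double *xStartDraw, double *yStartDraw, double *zStartDraw,
                            double *xEndDraw, double *yEndDraw, double *zEndDraw, int n)
{
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    if (index >= n)   // threads past the last valid element return early
        return;

    xStartDraw[index] = index;
    yStartDraw[index] = index;
    zStartDraw[index] = index;
    xEndDraw[index] = index;
    yEndDraw[index] = index;
    zEndDraw[index] = index;
}

// Launch as before, but pass the count as the last argument:
// calculation<<<(THREADGPU + THREADS_PER_BLOCK - 1) / THREADS_PER_BLOCK,
//               THREADS_PER_BLOCK>>>(d_xStartDraw, d_yStartDraw, d_zStartDraw,
//                                    d_xEndDraw, d_yEndDraw, d_zEndDraw, THREADGPU);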

Apart from that: Don't use malloc() in C++. Consider using smart pointers (e.g., std::unique_ptr) to manage memory allocations. And CUDA API calls do return an error code, which you should check…
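
For illustration, here is one way to follow both suggestions. The CHECK_CUDA macro is my own naming, not part of the CUDA API, and this sketch only covers one of the six arrays:

#include <cstdio>
#include <cstdlib>
#include <memory>
#include "cuda_runtime.h"

// Hypothetical helper: abort with a message if a CUDA call fails.
#define CHECK_CUDA(call)                                              \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            fprintf(stderr, "CUDA error %s at %s:%d\n",               \
                    cudaGetErrorString(err), __FILE__, __LINE__);     \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

int main()
{
    // host memory via std::unique_ptr instead of malloc()/free()
    auto xStartDraw = std::make_unique<double[]>(THREADGPU);

    // device memory, with the returned error code actually checked
    double *d_xStartDraw = nullptr;
    CHECK_CUDA(cudaMalloc(&d_xStartDraw, THREADGPU * sizeof(double)));

    // ... launch the kernel, then CHECK_CUDA(cudaGetLastError()) ...

    CHECK_CUDA(cudaMemcpy(xStartDraw.get(), d_xStartDraw,
                          THREADGPU * sizeof(double), cudaMemcpyDeviceToHost));
    CHECK_CUDA(cudaFree(d_xStartDraw));
    return 0;  // xStartDraw frees itself when it goes out of scope
}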

Upvotes: 2
