Gabs

Reputation: 301

CUDA program for matrix addition

I am trying to make a very simple program to perform matrix addition. I divided the code into two files, a main.cu file and a Matriz.cuh header file. The code is:

At main.cu:

#include <iostream>
#include <cuda.h>

#include "Matriz.cuh"

using std::cout;

int main(void)
{

    Matriz A;
    Matriz B;
    Matriz *C = new Matriz;
    int lin = 10;
    int col = 10;

    A.lin = lin;
    A.col = col;
    B.lin = lin;
    B.col = col;
    C->lin = lin;
    C->col = col;
    C->matriz = new double[lin*col];

    A.matriz = new double[lin*col];
    B.matriz = new double[lin*col];

    for (int ii = 0; ii < lin; ii++)
        for (int jj = 0; jj < col; jj++)
        {
            A.matriz[jj*A.lin + ii] = 1./(float)(10.*jj + ii + 10.0);
            B.matriz[jj*B.lin + ii] = (float)(jj + ii + 1);
        }

    somaMatriz(A, B, C);

    for (int ii = 0; ii < lin; ii++)
    {
        for (int jj = 0; jj < col; jj++)
            cout << C->matriz[jj*C->lin + jj] << " ";
        cout << "\n";
    }

    return 0;

}

At Matriz.cuh:

#include <cuda.h>
#include <iostream>
using std::cout;

#ifndef MATRIZ_CUH_
#define MATRIZ_CUH_

typedef struct{
    double *matriz;
    int    lin;
    int    col;
} Matriz;

__global__ void addMatrix(const Matriz A, const Matriz B, Matriz C)
{
    int idx = threadIdx.x + blockDim.x*gridDim.x;
    int idy = threadIdx.y + blockDim.y*gridDim.y;

    C.matriz[C.lin*idy + idx] = A.matriz[A.lin*idx + idy] + B.matriz[B.lin*idx + idy];
}

void somaMatriz(const Matriz A, const Matriz B, Matriz *C)
{
    Matriz dA;
    Matriz dB;
    Matriz dC;

    int BLOCK_SIZE = A.lin;

    dA.lin = A.lin;
    dA.col = A.col;
    dB.lin = B.lin;
    dB.col = B.col;
    dC.lin = C->lin;
    dC.col = C->col;

    cudaMalloc((void**)&dA.matriz, dA.lin*dA.col*sizeof(double));
    cudaMalloc((void**)&dB.matriz, dB.lin*dB.col*sizeof(double));
    cudaMalloc((void**)&dC.matriz, dC.lin*dC.col*sizeof(double));

    cudaMemcpy(dA.matriz, A.matriz, dA.lin*dA.col*sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dB.matriz, B.matriz, dB.lin*dB.col*sizeof(double), cudaMemcpyHostToDevice);

    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(dA.lin/dimBlock.x, dA.col/dimBlock.y);

    addMatrix<<<dimGrid, dimBlock>>>(dA, dB, dC);

    cudaMemcpy(C->matriz, dC.matriz, dC.lin*dC.col*sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(dA.matriz);
    cudaFree(dB.matriz);
    cudaFree(dC.matriz);

    return;
}

#endif /* MATRIZ_CUH_ */

What I am getting: matrix C is filled with ones, no matter what I do. I am using this program to get an idea of how to work with variable-size matrices in a GPU program. What is wrong with my code?

Upvotes: 0

Views: 1025

Answers (1)

Robert Crovella

Reputation: 151972

Any time you're having trouble with CUDA code, it's good practice to do proper CUDA error checking and to run your code with cuda-memcheck. When I run your code with cuda-memcheck, I get the indication that the kernel is trying to do out-of-bounds read operations. Since your kernel is trivially simple, that means your indexing calculations must be incorrect.
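
For reference, here is a minimal sketch of that kind of error checking (one common macro-based pattern; the macro name and message wording are just illustrative, not the only way to do it):

#include <cstdio>
#include <cstdlib>

// report the most recent CUDA runtime error, with file/line context, and abort
#define cudaCheckErrors(msg) \
    do { \
        cudaError_t __err = cudaGetLastError(); \
        if (__err != cudaSuccess) { \
            fprintf(stderr, "Fatal error: %s (%s at %s:%d)\n", \
                msg, cudaGetErrorString(__err), __FILE__, __LINE__); \
            exit(1); \
        } \
    } while (0)

You would then drop a check after each CUDA API call and after the kernel launch in somaMatriz, for example:

addMatrix<<<dimGrid, dimBlock>>>(dA, dB, dC);
cudaCheckErrors("addMatrix kernel launch failed");

cudaMemcpy(C->matriz, dC.matriz, dC.lin*dC.col*sizeof(double), cudaMemcpyDeviceToHost);
cudaCheckErrors("cudaMemcpy device-to-host failed");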

Your program needs at least 2 changes to get it working for small square matrices:

  1. The index calculations in the kernel for A, B, and C should all use the same indexing order. This line:

    C.matriz[C.lin*idy + idx] = A.matriz[A.lin*idx + idy] + B.matriz[B.lin*idx + idy];
    

    should be:

    C.matriz[C.lin*idy + idx] = A.matriz[A.lin*idy + idx] + B.matriz[B.lin*idy + idx];
    
  2. Your x/y index creation in the kernel is not correct:

    int idx = threadIdx.x + blockDim.x*gridDim.x;
    int idy = threadIdx.y + blockDim.y*gridDim.y;
    

    they should be:

    int idx = threadIdx.x + blockDim.x*blockIdx.x;
    int idy = threadIdx.y + blockDim.y*blockIdx.y;
    

With the above changes, I was able to get rational looking output.

Your setup code also doesn't appear to be handling larger matrices correctly:

int BLOCK_SIZE = A.lin;
...
dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
dim3 dimGrid(dA.lin/dimBlock.x, dA.col/dimBlock.y);

You probably want something like:

int BLOCK_SIZE = 16;
...
dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
dim3 dimGrid((dA.lin + dimBlock.x - 1)/dimBlock.x, (dA.col + dimBlock.y - 1)/dimBlock.y);
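
For example, with lin = col = 100 and BLOCK_SIZE = 16, the original 100/16 grid calculation launches only 6 blocks (96 threads) per dimension and leaves part of the matrix unprocessed, while the rounded-up version launches (100 + 15)/16 = 7 blocks (112 threads) per dimension, which covers every element but also creates threads past the edge of the matrix.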

With those changes, you should add a valid thread check to your kernel, something like this:

__global__ void addMatrix(const Matriz A, const Matriz B, Matriz C)
{
    int idx = threadIdx.x + blockDim.x*blockIdx.x;
    int idy = threadIdx.y + blockDim.y*blockIdx.y;

    if ((idx < A.col) && (idy < A.lin))
      C.matriz[C.lin*idy + idx] = A.matriz[A.lin*idy + idx] + B.matriz[B.lin*idy + idx];
}

I also haven't validated that every dimension is being compared against the appropriate lin or col limit. That is something else to verify for non-square matrices.
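
One way to check that is a small non-square test driver along these lines (just a sketch: the dimensions lin = 5, col = 7 and the fill values are arbitrary, and it reuses the Matriz struct and somaMatriz from your header):

#include <iostream>
#include <cmath>

#include "Matriz.cuh"

int main(void)
{
    const int lin = 5;   // deliberately non-square
    const int col = 7;

    Matriz A, B, C;
    A.lin = B.lin = C.lin = lin;
    A.col = B.col = C.col = col;
    A.matriz = new double[lin*col];
    B.matriz = new double[lin*col];
    C.matriz = new double[lin*col];

    // fill A and B with easily recognizable values, column-major like the question
    for (int ii = 0; ii < lin; ii++)
        for (int jj = 0; jj < col; jj++) {
            A.matriz[jj*lin + ii] = ii + 1;
            B.matriz[jj*lin + ii] = 10*(jj + 1);
        }

    somaMatriz(A, B, &C);

    // compare every element of the GPU result against the CPU sum
    int errors = 0;
    for (int ii = 0; ii < lin; ii++)
        for (int jj = 0; jj < col; jj++)
            if (std::fabs(C.matriz[jj*lin + ii]
                          - (A.matriz[jj*lin + ii] + B.matriz[jj*lin + ii])) > 1e-12)
                errors++;
    std::cout << (errors ? "MISMATCH\n" : "all elements match\n");

    delete[] A.matriz;
    delete[] B.matriz;
    delete[] C.matriz;
    return 0;
}

If the lin/col comparisons in the kernel and the grid dimensions are matched up consistently, every element should check out; if they aren't, a test like this flags it immediately.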

Upvotes: 1
