Ksi

Reputation: 37

Matrix not received properly with MPI_Send and MPI_Recv

I am new to programming with MPI and I have an exercise where I have to multiply 2 matrices using MPI_Send and MPI_Recv, sending both matrices to my processes and sending the result back to the root process (both matrices are square, and N is equal to the number of processes).

This is the code I have written:

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char *argv[]){
srand(time(NULL));

int rank, nproc;
MPI_Status status;

MPI_Init(&argc, &argv);

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &nproc);

int **matrice = (int **)malloc(nproc * sizeof(int *));
for ( int i=0; i<nproc; i++)
    matrice[i] = (int *)malloc(nproc * sizeof(int));

int **matrice1 = (int **)malloc(nproc * sizeof(int *));
for (int i=0; i<nproc; i++)
    matrice1[i] = (int *)malloc(nproc * sizeof(int));

int **result = (int **)malloc(nproc * sizeof(int *));
for (int i=0; i<nproc; i++)
    result[i] = (int *)malloc(nproc * sizeof(int));

if(rank == 0){
    for(int i = 0; i < nproc; i++){
        for(int j = 0; j < nproc; j++){
            matrice[i][j] = (rand() % 20) + 1;
            matrice1[i][j] = (rand() % 20) + 1;
        }
    }
    
    for(int i = 1; i < nproc; i++){
        MPI_Send(&(matrice[0][0]), nproc*nproc, MPI_INT, i, 1, MPI_COMM_WORLD);
        MPI_Send(&(matrice1[0][0]), nproc*nproc, MPI_INT, i, 2, MPI_COMM_WORLD);
    }
    
}else{
    MPI_Recv(&(matrice[0][0]), nproc*nproc, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
    MPI_Recv(&(matrice1[0][0]), nproc*nproc, MPI_INT, 0, 2, MPI_COMM_WORLD, &status);
}
    
for(int i = 0; i < nproc; i++){
    result[rank][i] = 0;
    for(int j = 0; j < nproc; j++){
        result[rank][i] += matrice[rank][j] * matrice1[j][i];       
    }
}

if(rank != 0){
    MPI_Send(&result[rank][0], nproc, MPI_INT, 0, 'p', MPI_COMM_WORLD);
}


if(rank == 0){
    for(int i = 1; i < nproc; i++){
        MPI_Recv(&result[i][0], nproc, MPI_INT, i, 'p', MPI_COMM_WORLD, &status);
    }
}

MPI_Finalize();

}

I am having problems with MPI_Send or MPI_Recv: only the first row of the matrix I receive is correct, the second row is filled with 0, and the others are random.

I don't understand what is causing this problem.

Upvotes: 1

Views: 366

Answers (1)

dreamcrash

Reputation: 51443

I am having problems with MPI_Send or MPI_Recv: only the first row of the matrix I receive is correct, the second row is filled with 0, and the others are random.

You are calling MPI_Send as follows:

MPI_Send(&(matrice[0][0]), nproc*nproc, MPI_INT, i, 1, MPI_COMM_WORLD);

telling MPI that you will be sending nproc*nproc elements starting at the address &(matrice[0][0]). MPI_Send expects those nproc*nproc elements to be contiguous in memory, so your matrices must be allocated contiguously. You can think of the memory layout of such a matrix as:

| ------------ data used in the MPI_Send -----------|
|     row1          row2         ...      rowN      |
|[0, 1, 2, 3, N][0, 1, 2, 3, N]  ... [0, 1, 2, 3, N]|
\---------------------------------------------------/

From the last element of one row to the first element of the next row there is no gap.

Unfortunately, you have allocated your matrix as:

int **matrice = (int **)malloc(nproc * sizeof(int *));
for ( int i=0; i<nproc; i++)
    matrice[i] = (int *)malloc(nproc * sizeof(int));

which does not allocate the matrix contiguously in memory: it allocates an array of pointers to rows, and those rows are not required to be adjacent in memory. You can think of that matrix as having the following memory layout:

| ------------ data used in the MPI_Send ----------|
| row1 [0, 1, 2, 3, N] ... (some "random" stuff)   |
\--------------------------------------------------/
  row2 [0, 1, 2, 3, N] ... (some "random" stuff)
  ...
  rowN [0, 1, 2, 3, N] ... (some "random" stuff)

From the last element of one row to the first element of the next row there may be a memory gap, which makes it impossible for MPI_Send to know where the next row starts. That is why you receive the first row correctly, but not the remaining rows.

Among others, you can use one of the following approaches to solve this issue:

  1. allocate the matrix contiguously in memory;
  2. send the matrix row by row.

The simplest (and better-performing) solution is the first approach; check this SO Thread to see how to dynamically allocate a contiguous block of memory for a 2D array.

Upvotes: 3
