Burak Keceli

Reputation: 953

MPI Broadcasting dynamic 2D array to other processors

I have read many explanations, but I still cannot figure out how to handle this situation. What I want to do is this: on the master processor, I create a dynamic 2D array. Then I want to:

1- send this array to the other processors, and have each processor print the whole 2D array.
2- send part of this array to the others, and have each processor print its part to the screen.

For example: I have an 11*11 2D array and 4 processors. Rank 0 is the master; the others are slaves. For the first case, I want to send the whole array to rank 1, rank 2, and rank 3. For the second case, I want to share the rows among the slaves. 11/3 = 3, so rank 1 takes 3 rows, rank 2 takes 3 rows, and rank 3 takes 5 rows.
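To make the split concrete, this is the arithmetic I have in mind (just a sketch, assuming the last slave takes the leftover rows):

int numberOfSlaves = numberOfProcessors - 1;  // 3 slaves in my example
int rowsPerSlave   = 11 / numberOfSlaves;     // 11 / 3 = 3 rows per slave
int extraRows      = 11 % numberOfSlaves;     // 2 rows left over
// ranks 1 and 2 take rowsPerSlave rows each,
// the last rank takes rowsPerSlave + extraRows = 5 rows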

Here is my code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define MASTER (processorID == 0)   /* rank 0 is the master      */
#define SLAVE  (processorID != 0)   /* every other rank is a slave */

int processorID;
int numberOfProcessors;

int main(int argc, char* argv[]){
    int i, j;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numberOfProcessors);
    MPI_Comm_rank(MPI_COMM_WORLD, &processorID);

    double **array;

    if(MASTER){
        array = (double**) malloc(11*sizeof(double *));
        for(i=0; i<11; i++){
            array[i] = (double *) malloc(11*sizeof(double));
        }

        for(i=0; i<11; i++){
            for(j=0; j<11; j++){
                array[i][j] = i*j;
            }
        }
    }
    MPI_Bcast(array, 11*11, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    if(SLAVE){
        for(i=0; i<11; i++){
            for(j=0; j<11; j++){
                printf("%f ", array[i][j]);
            }
        }
    }

    MPI_Finalize();
    return 0;
}

According to this link, MPI_Bcast a dynamic 2d array, I need to create my array like this:

if (MASTER){
    array = (double**) malloc(121*sizeof(double));

    for(i=0; i<11; i++){
        for(j=0; j<11; j++){
            array[i][j] = i*j;  // this is not working.
        }
    }
}

But if I do this, I cannot initialize each member of the array; the inner for loops do not work, and I could not find a way to solve it.

For my second question, I followed this link: sending blocks of 2D array in C using MPI. I think I need to change what is inside if(SLAVE): I should create a 2D sub-array for each slave processor and use MPI_Scatterv, but I could not understand it completely. Here is my attempt (my guess at the allocation is sketched after the code):

int main() {

    ...
    ...

    MPI_Scatterv() // what must be here?
    if(SLAVE){
        if(processorID == numberOfProcessors-1){
            subArray = (double**) malloc(5*sizeof(double *)); // because the last processor takes 5 rows
            for(i=0; i<5; i++){
                subArray[i] = (double *) malloc(11*sizeof(double));
            }
        }
        else {
            subArray = (double**) malloc(3*sizeof(double *));
            for(i=0; i<3; i++){
                subArray[i] = (double *) malloc(11*sizeof(double));
            }
        }
    }
}
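If I understand the linked answer correctly, the allocation inside if(SLAVE) would probably have to be a single flat buffer per slave instead of row pointers. My guess (just a sketch; myRows is a name I made up):

// my guess: each slave receives its rows into one contiguous buffer
int myRows = (processorID == numberOfProcessors - 1) ? 5 : 3;
double *subArray = (double *) malloc(myRows * 11 * sizeof(double));
// element (i, j) of my part would then be subArray[j + i*11]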

Upvotes: 1

Views: 3411

Answers (2)

talonmies

Reputation: 72350

You can't use an array of pointers (what you are incorrectly calling a "2D array"), because each row pointer is not portable to another node's address space.

The code you quoted for creating a 2D array in a linear memory allocation is perfectly correct; all you need to do is index that memory in row-major order, so that the loops become:

double* array = (double*) malloc(121*sizeof(double)); 
if (MASTER){
    for(i=0; i<11; i++){
        for(j=0; j<11; j++){
            array[j+i*11] = i*j;  // row major indexing here
        }
    }
}

/* Scatter code follows */

You can safely scatter this sort of array to multiple nodes.
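A minimal sketch of such a scatter, assuming the 3/3/5 row split from the question (counts and displacements are in units of doubles, rank 0 keeps nothing, and myRows is an illustrative name):

int sendcounts[4] = { 0, 3*11, 3*11, 5*11 };  /* doubles sent to each rank   */
int displs[4]     = { 0, 0,    3*11, 6*11 };  /* offsets into the flat array */

double *myRows = (double *) malloc(sendcounts[processorID] * sizeof(double));
MPI_Scatterv(array, sendcounts, displs, MPI_DOUBLE,
             myRows, sendcounts[processorID], MPI_DOUBLE,
             0, MPI_COMM_WORLD);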

Upvotes: 1

lethal-guitar

Reputation: 4519

C does not really have multidimensional arrays. I would recommend storing your values in a regular 1D buffer and computing the corresponding 1D index from your 2D indices. Like so:

double* data = (double*)malloc(sizeof(double)*11*11);

// Now, to access data[i][j], simply do:
data[j + i*11] = ...; // this "maps" 2D indices into 1D

This will spare you all the trouble with the hierarchical malloc-ing, and it can easily be passed to the MPI APIs.
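For example, broadcasting the whole flat buffer becomes trivial (a sketch, assuming rank 0 is the root and fills the buffer first):

double *data = (double *) malloc(11 * 11 * sizeof(double));
if (processorID == 0) {
    for (int i = 0; i < 11; i++)
        for (int j = 0; j < 11; j++)
            data[j + i*11] = i * j;   /* row-major fill on the root */
}
/* one contiguous buffer, so a single MPI_Bcast ships all 121 doubles */
MPI_Bcast(data, 11*11, MPI_DOUBLE, 0, MPI_COMM_WORLD);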

Upvotes: 1
