user4352158

Reputation: 729

How to make MPI_Send have processors send in order instead of randomly?

I am trying to run the program below, which uses parallel programming. If we use 4 processors, I want them to contain the sums 1+2=3, 3+4=7, 5+6=11, and 7+8=15. So I want sumvector to contain 3, 7, 11, and 15, in that order. However, since MPI_Send has the processors sending in random order, I don't want sumvector to contain, say, 7, 15, 3, 11. How can I modify the code below to ensure this?

#include <iostream>
#include <vector>
#include <cstdio>
#include <mpi.h>

using namespace std;

int main(int argc, char *argv[]){

    int mynode, totalnodes;
    int sum, startval, endval, accum;
    MPI_Status status;
    int master = 3; // rank that collects the partial sums

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &totalnodes); // get totalnodes
    MPI_Comm_rank(MPI_COMM_WORLD, &mynode);     // get mynode

    sum = 0; // zero sum for accumulation
    vector<int> sumvector;
    startval = 8*mynode/totalnodes + 1;
    endval = 8*(mynode+1)/totalnodes;

    for(int i = startval; i <= endval; i = i+1)
        sum = sum + i;
    sumvector.push_back(sum); // store this rank's partial sum

    if(mynode != master)
    {
        MPI_Send(&sum, 1, MPI_INT, master, 1, MPI_COMM_WORLD); //#9, p.92
    }
    else
    {
        for(int j = 0; j < totalnodes; j = j+1){
            if (j != master)
            {
                MPI_Recv(&accum, 1, MPI_INT, j, 1, MPI_COMM_WORLD, &status);
                printf("processor %d received from %d\n", mynode, j);
                sum = sum + accum;
            }
        }
    }

    MPI_Finalize();
    return 0;
}

Am I better off using multithreading instead of MPI?

Upvotes: 0

Views: 369

Answers (2)

Hristo Iliev

Reputation: 74395

I'm not sure what you want to do, but your current code is equivalent (sans printing what number was received from which rank) to the following one:

for(int i=startval;i<=endval;i=i+1)
    sum=sum+i;
sumvector.push_back(sum);

MPI_Reduce(mynode == master ? MPI_IN_PLACE : &sum, &sum, 1, MPI_INT,
           MPI_SUM, master, MPI_COMM_WORLD);

What you are looking for is either this (the result is gathered by the master rank only):

for(int i=startval;i<=endval;i=i+1)
    sum=sum+i;

sumvector.resize(totalnodes);

MPI_Gather(&sum, 1, MPI_INT, &sumvector[0], 1, MPI_INT,
           master, MPI_COMM_WORLD);

or this (results are gathered into all ranks):

for(int i=startval;i<=endval;i=i+1)
    sum=sum+i;

sumvector.resize(totalnodes);

MPI_Allgather(&sum, 1, MPI_INT, &sumvector[0], 1, MPI_INT, MPI_COMM_WORLD);

Also, the following statement is entirely wrong:

However, since MPI_Send has the processors sending in random order, I don't want sumvector to contain, say, 7, 15, 3, 11.

MPI point-to-point communication requires exactly two things in order to succeed: there must be a sender that executes MPI_Send and a receiver that executes a matching MPI_Recv. Message reception order can be enforced by simply calling MPI_Recv in a loop with increasing source rank, exactly as is done in the code you've shown.
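
For illustration, here is a minimal sketch of that ordered-receive pattern, reusing the variable names from the question (mynode, totalnodes, master, sum, sumvector, status); storing the master's own partial sum directly into its slot is my assumption about where you want it to go:

// Receive the partial sums in increasing rank order so that
// sumvector[j] always holds rank j's sum, regardless of when
// each rank actually called MPI_Send.
if (mynode != master) {
    MPI_Send(&sum, 1, MPI_INT, master, 1, MPI_COMM_WORLD);
} else {
    sumvector.resize(totalnodes);
    for (int j = 0; j < totalnodes; j++) {
        if (j == master)
            sumvector[j] = sum; // master's own partial sum, no message needed
        else
            MPI_Recv(&sumvector[j], 1, MPI_INT, j, 1,
                     MPI_COMM_WORLD, &status);
    }
}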

Upvotes: 2

David Trevelyan

Reputation: 166

There are a number of ways you can do this more simply. To start with, you could "gather" the values into a vector on the master process:

std::vector<int> sumvector, recvcounts, displs;
startval = 8*mynode/totalnodes+1;
endval = 8*(mynode+1)/totalnodes;

for (int i=0; i<totalnodes; i++)
{
    sumvector.push_back(0);
    recvcounts.push_back(1);
    displs.push_back(i);
}

int myval = startval + endval;
MPI_Gatherv(&myval,
            1,
            MPI_INT,
            &sumvector[0],
            &recvcounts[0],
            &displs[0],
            MPI_INT,
            master,
            MPI_COMM_WORLD);

which results in sumvector containing:

node 0: (0, 0, 0, 0)
node 1: (0, 0, 0, 0)
node 2: (0, 0, 0, 0)
node 3: (3, 7, 11, 15)
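
(As a design note: with every receive count equal to 1 and displacements 0, 1, ..., totalnodes-1, this MPI_Gatherv call does the same job as the plain MPI_Gather shown in the other answer; MPI_Gatherv only pays off when counts or displacements vary per rank.)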

You could also consider MPI_Allreduce instead. The process would go as follows:

Initialise all vector elements to 0,

for (int i=0; i<totalnodes; i++)
{
    sumvector.push_back(0);
}

and modify mynode's entry to be the value you want,

sumvector[mynode] = startval + endval;

Before MPI_Allreduce, the sumvectors contain:

node 0: (3, 0, 0, 0)
node 1: (0, 7, 0, 0)
node 2: (0, 0, 11, 0)
node 3: (0, 0, 0, 15)

Now, when you sum all of the arrays on each node,

MPI_Allreduce(MPI_IN_PLACE,
              &sumvector[0],
              totalnodes,
              MPI_INT,
              MPI_SUM,
              MPI_COMM_WORLD);

it results in sumvector containing:

node 0: (3, 7, 11, 15)
node 1: (3, 7, 11, 15)
node 2: (3, 7, 11, 15)
node 3: (3, 7, 11, 15)

on every node.
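
Note that this zero-fill-and-MPI_Allreduce approach gives the same result as the single MPI_Allgather call from the other answer; the latter moves less data, while the reduce-based version can be handy when each rank contributes to more than one slot of the result.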

Upvotes: 0
