FullMetalScientist

Reputation: 71

Can you specify which buffer to send to in MPI?

Say, if you did two MPI_Sends and two MPI_Recvs, can you specify which buffer the contents of the MPI_Send will go to, if you have two different buffers available and each of the MPI_Recvs receives from each of the buffers?

if (rank == MASTER) {
  for (kk = 1; kk < size; kk++) {
    MPI_Recv(printbuf, numcols, MPI_DOUBLE, kk, tag, MPI_COMM_WORLD, &status);
    sum = sum + printbuf[kk-1];
  }
}
else {
  MPI_Send(sum, local_ncols + 2, MPI_DOUBLE, MASTER, tag, MPI_COMM_WORLD);
}
if (rank == MASTER) {
  for (kk = 1; kk < size; kk++) {
    MPI_Recv(secondbuf, numcols, MPI_DOUBLE, kk, tag, MPI_COMM_WORLD, &status);
    sum2 = sum2 + secondbuf[kk-1];
  }
}
else {
  MPI_Send(sum2, local_ncols + 2, MPI_DOUBLE, MASTER, tag, MPI_COMM_WORLD);
}

This is with Open MPI. Each rank calculates sum and sum2. I want to obtain the summation of all the ranks' sums and sum2s. Is there a better way? For instance, by being able to specify which buffer to send sum and sum2 to, could the above code be condensed?

Upvotes: 1

Views: 827

Answers (1)

Jonathan Dursi

Reputation: 50927

Angelos's comment is right: if you're trying to sum results from all processes, MPI_Reduce (e.g., this Q/A, amongst others) is the way to go.

Answering the general question as posed, though: when you send a message, you can't control what the receiving process does with it - all you're doing is sending it. However, there is one piece of metadata you can control besides which task you're sending to: the tag. The receiving process can decide which buffer to receive into based on the tag:

if (rank != MASTER) {
    /* worker process: same destination, two different tags */
    MPI_Send(sum1, local_ncols + 2, MPI_DOUBLE, MASTER, tag1, MPI_COMM_WORLD);
    MPI_Send(sum2, local_ncols + 2, MPI_DOUBLE, MASTER, tag2, MPI_COMM_WORLD);
} else {
    /* master process: the tag decides which buffer each message lands in */
    for (kk = 1; kk < size; kk++) {
        MPI_Recv(printbuf, numcols, MPI_DOUBLE, kk, tag1, MPI_COMM_WORLD, &status);
        MPI_Recv(secondbuf, numcols, MPI_DOUBLE, kk, tag2, MPI_COMM_WORLD, &status);
    }
}

There are other approaches you can take, as well. You can rely on MPI's non-overtaking guarantee, which states that two messages sent from task1 to task2 will be received in the order they were sent, as long as both could match the same receive; this ensures that the receiving process can tell the first message apart from the second.

Another approach, which can work in some contexts, is to pack both values into the same message (this can be useful for short messages, to reduce latency) and unpack them on the receiving side; this is another way of making sure you know which piece of data is going into which buffer.

Upvotes: 4
