Vyacheslav

Reputation: 113

Local sender-receiver reduction operation in MPI

I have a periodic Cartesian grid of MPI processes; for 6 processes the layout looks like this:

 __________________________
|       |        |        |
|  0    |   1    |   2    |
|_______|________|________|
|       |        |        |
|  3    |   4    |   5    |
|_______|________|________|

where the numbers are the ranks of the processes in the communicator. During the calculation every process has to send a number to its left neighbour, and this number should be summed with the one the left neighbour already has:

int a[2];
a[0] = calculateSomething1();
a[1] = calculateSomething2(); 
int tempA;
MPI_Request recv_request, send_request;

//Receive from the right neighbour
MPI_Irecv(&tempA, 1, MPI_INT, myMPI.getRightNeigh(), 0, Cart_comm, &recv_request);

//Send to the left neighbour
MPI_Isend(&a[0], 1, MPI_INT, myMPI.getLeftNeigh(), 0, Cart_comm, &send_request);
MPI_Status status;

MPI_Wait(&recv_request, &status);
MPI_Wait(&send_request, &status);

//now I have to do something like this
a[1] += tempA;

I am wondering if there is a sort of "local" reduction operation for a single sender-receiver pair, or whether the only solution is to build "local" communicators and use collective operations there?

Upvotes: 0

Views: 70

Answers (1)

Zulan

Reputation: 22670

You can use MPI_Sendrecv here. It is made for exactly this kind of paired exchange.
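
A minimal sketch of what that could look like, reusing the names from your snippet (Cart_comm, myMPI.getLeftNeigh() and myMPI.getRightNeigh() are assumed to behave as in your code):

int tempA = 0;
MPI_Status status;

// Send a[0] to the left neighbour and receive the right neighbour's value
// in a single call; MPI handles the matching and avoids deadlock.
MPI_Sendrecv(&a[0], 1, MPI_INT, myMPI.getLeftNeigh(),  0,
             &tempA, 1, MPI_INT, myMPI.getRightNeigh(), 0,
             Cart_comm, &status);

// "Reduce" locally after the exchange
a[1] += tempA;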

I don't think you would get any benefit from using collectives.

BTW: Your code is probably not correct. You are sending from a local stack variable &a[0]. You must complete the send_request before leaving the scope and reusing a's memory. This is done by some form of MPI_Wait(all) or a successful MPI_Test.
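
For example, keeping your nonblocking calls, a sketch (same assumed names as above) that completes both requests before a can be reused:

MPI_Request requests[2];

// Post the receive and the send
MPI_Irecv(&tempA, 1, MPI_INT, myMPI.getRightNeigh(), 0, Cart_comm, &requests[0]);
MPI_Isend(&a[0],  1, MPI_INT, myMPI.getLeftNeigh(),  0, Cart_comm, &requests[1]);

// Both requests must be complete before a goes out of scope or is reused
MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);

a[1] += tempA;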

Upvotes: 1
