Reputation: 1123
I'm learning MPI and I don't know whether it's OK to send or receive multiple messages using different chunks of the same array.
Every processor needs to send different chunks of its local data to the other processors (and receive their data, too).
Say I have four processors. I use this:
MPI_Request sendReqs[4];
MPI_Status sendStats[4];
MPI_Request recReqs[4];
MPI_Status recStats[4];
// locData is an array of integers. I need to distribute chunks of it
// assume I have four processors
for (i = 0; i < 4; i++) {
    send = locData + indices[2*i];             // location in the buffer where the i-th send starts
    count = indices[2*i+1] - indices[2*i] + 1; // how many elements we send to rank i
    MPI_Isend(send, count, MPI_INT, i, rank, MPI_COMM_WORLD, &sendReqs[i]);
}
similarly, would this be OK?
// gatheredData is where each processor stores the data it gets from the others
// assume I have four processors
for (i = 0; i < 4; i++) {
    start = gatheredData + gatheredIndices[2*i];
    count = gatheredIndices[2*i+1] - gatheredIndices[2*i] + 1;
    MPI_Irecv(start, count, MPI_INT, i, i, MPI_COMM_WORLD, &recReqs[i]);
}
I finish with these calls, to make sure everybody got the data:
MPI_Waitall(4, sendReqs, sendStats);
MPI_Waitall(4, recReqs, recStats);
This does not work: gatheredData ends up containing some junk values alongside the good ones.
Upvotes: 3
Views: 6280
Reputation: 1755
Any point-to-point sending operation can use the same buffer as any other send, since they're not modifying the buffer's contents.
On the receiving side, the results are undefined (and the program may even crash) unless you guarantee that each receive operation has a buffer that doesn't overlap with the buffer of any other pending send or receive, from the time the receive is posted until the time it completes. Your code using MPI_Irecv looks OK, provided that there's no overlap in gatheredIndices.
Upvotes: 2