Reputation: 143
I want to understand how sending and receiving are processed by MPI. Suppose I allocate a buffer of [12][50] elements as follows:
int **buf = malloc(12 * sizeof(int *));
for (i = 0; i < 12; i++)
{
    buf[i] = malloc(50 * sizeof(int));
    // Immediately fill each row with 1s for testing purposes.
    for (j = 0; j < 50; j++)
    {
        buf[i][j] = 1;
    }
}
Now, I want to send each row to P = 12 processors using the non-blocking MPI_Isend and MPI_Irecv as follows:
for (i = 0; i < 12; i++)
{
    MPI_Isend(buf[i], 50, MPI_INT, i, TAG_0, MPI_COMM_WORLD, &req[i]);
    MPI_Wait(&req[i], &status[i]);
}
for (i = 0; i < 12; i++)
{
    MPI_Irecv(bufRecv[i], 50, MPI_INT, MASTER, TAG_0, MPI_COMM_WORLD, &req[i]);
    MPI_Wait(&req[i], &status[i]);
}
As far as I know, MPI_Isend here sends the ith row, i.e. the 50 consecutive elements starting at the memory address stored in buf[i], whereas MPI_Irecv receives the corresponding ith row and stores it in bufRecv[i]. Am I right? If not, can someone explain why?
Thank you.
Upvotes: 0
Views: 918
Reputation: 3819
Yes, you are right. MPI_Irecv would receive the message into bufRecv. However, your code does not really make sense. First off, a non-blocking communication call directly followed by a wait is effectively the same as a blocking call.
Then the receiving part should be executed on a different process, and from your description each receiver would call MPI_Irecv only once, not 12 times. Or, if every process is to receive all rows, the sender has to issue number-of-processes × number-of-rows sends. A corrected point-to-point version is sketched below.
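To make that concrete, here is a minimal sketch of the corrected point-to-point pattern (my naming; it assumes the job runs with exactly 12 processes, rank 0 as MASTER, and the same 12 x 50 data as in the question). Only the master posts the sends, all of them before waiting, and every rank posts exactly one receive for its own row:

#include <mpi.h>
#include <stdlib.h>

#define MASTER 0
#define TAG_0  0
#define NROWS  12   /* assumes exactly 12 processes */
#define NCOLS  50

int main(int argc, char **argv)
{
    int rank, i, j;
    int myRow[NCOLS];           /* each rank's own row      */
    MPI_Request req[NROWS];     /* used only on the master  */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == MASTER)
    {
        /* Allocate and fill the 12 x 50 buffer as in the question. */
        int **buf = malloc(NROWS * sizeof(int *));
        for (i = 0; i < NROWS; i++)
        {
            buf[i] = malloc(NCOLS * sizeof(int));
            for (j = 0; j < NCOLS; j++)
                buf[i][j] = 1;
        }

        /* Post all sends first; row i goes to rank i (including a
         * self-send to the master). We wait only after the matching
         * receives are posted, so the calls stay non-blocking. */
        for (i = 0; i < NROWS; i++)
            MPI_Isend(buf[i], NCOLS, MPI_INT, i, TAG_0,
                      MPI_COMM_WORLD, &req[i]);
    }

    /* Every rank, master included, posts exactly one receive. */
    MPI_Recv(myRow, NCOLS, MPI_INT, MASTER, TAG_0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    if (rank == MASTER)
        MPI_Waitall(NROWS, req, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}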
Most importantly: what you describe is exactly what MPI_Scatter does, so there is no need to implement it yourself. If you want one process to send the same data to every process, use MPI_Bcast instead. A sketch with MPI_Scatter follows.
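A minimal sketch of the same distribution done with MPI_Scatter (again my naming, and again assuming exactly 12 processes). Note that MPI_Scatter reads the send buffer as one contiguous block, so the malloc-per-row layout from the question would have to be replaced by a contiguous 12 x 50 array on the root:

#include <mpi.h>

#define NROWS 12   /* assumes exactly 12 processes */
#define NCOLS 50

int main(int argc, char **argv)
{
    int rank, i, j;
    int sendbuf[NROWS][NCOLS];   /* significant only on the root */
    int myRow[NCOLS];            /* each rank's own row          */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        for (i = 0; i < NROWS; i++)
            for (j = 0; j < NCOLS; j++)
                sendbuf[i][j] = 1;

    /* One collective call: rank i ends up with row i (50 ints). */
    MPI_Scatter(sendbuf, NCOLS, MPI_INT,
                myRow,   NCOLS, MPI_INT,
                0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}

If every process instead needs the whole 12 x 50 array, a single MPI_Bcast(&sendbuf[0][0], NROWS * NCOLS, MPI_INT, 0, MPI_COMM_WORLD) replaces all the point-to-point calls, again assuming the data is stored contiguously.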
Upvotes: 0