Reputation: 25
I've got a big problem. My MPI_Sendrecv calls block, and consequently the program doesn't work. Suppose every process has the variables
int part_N, part_S, part_E, part_O;
int loc_r[rank], loc_c[rank];
Here's my code:
std::vector<int> ssx(loc_r[rank],0), rsx(loc_r[rank],0), sdx(loc_r[rank],0), rdx(loc_r[rank],0);
std::vector<int> sup(loc_c[rank],0), rup(loc_c[rank],0), sdwn(loc_c[rank],0), rdwn(loc_c[rank],0);
MPI_Sendrecv(&ssx[0], loc_r[rank], MPI_INT, part_O, 0, &rsx[0], loc_r[rank], MPI_INT, part_O, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
MPI_Sendrecv(&sdx[0], loc_r[rank], MPI_INT, part_E, 1, &rdx[0], loc_r[rank], MPI_INT, part_E, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
MPI_Sendrecv(&sup[0], loc_c[rank], MPI_INT, part_N, 2, &rup[0], loc_c[rank], MPI_INT, part_N, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
MPI_Sendrecv(&sdwn[0], loc_c[rank], MPI_INT, part_S, 3, &rdwn[0], loc_c[rank], MPI_INT, part_N, 3, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
Notice that the partner variables can be equal, i.e. part_O == part_E in the same process. Is this the problem?
Hope you can help me!
EDIT: The problem seems to be memory allocation. It's as if I can only allocate vectors or arrays of up to 5 integer elements; if loc_r[rank], for example, is greater than 5, the program crashes. It doesn't depend on the number of processes. Any hint?
Upvotes: 0
Views: 561
Reputation: 4926
The code blocks because the MPI_Sendrecv calls are not paired correctly.
It seems to me that the correct way to implement the circular communication in a Cartesian distribution pattern, for your specific case, would be as follows:
MPI_Sendrecv(&ssx[0], loc_r[rank], MPI_INT, part_O, 0, &rdx[0], loc_r[rank], MPI_INT, part_E, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
MPI_Sendrecv(&sdx[0], loc_r[rank], MPI_INT, part_E, 1, &rsx[0], loc_r[rank], MPI_INT, part_O, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
MPI_Sendrecv(&sup[0], loc_c[rank], MPI_INT, part_N, 2, &rdwn[0], loc_c[rank], MPI_INT, part_S, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
MPI_Sendrecv(&sdwn[0], loc_c[rank], MPI_INT, part_S, 3, &rup[0], loc_c[rank], MPI_INT, part_N, 3, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
That is, reading the first Sendrecv: "I send the LEFT frame toward WEST and receive the RIGHT one from EAST". One note: I suggest using a single coordinate system throughout: left-right-up-down, or sx-dx-su-giu :), or W-E-N-S...
Note also that if you must not communicate anything from/to a given direction (e.g., on a domain boundary), you can simply assign the special value MPI_PROC_NULL to part_X.
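To make this concrete, here is a minimal, self-contained sketch. It is not your exact setup: I assume a non-periodic grid and a hypothetical frame length n, and I compute the partner ranks with MPI_Cart_shift, which returns MPI_PROC_NULL at the grid edges automatically, so the paired Sendrecvs below need no special-casing on boundary processes:
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int dims[2] = {0, 0}, periods[2] = {0, 0};   // non-periodic 2D grid
    MPI_Dims_create(size, 2, dims);
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

    // MPI_Cart_shift yields MPI_PROC_NULL at non-periodic boundaries.
    int part_N, part_S, part_O, part_E;
    MPI_Cart_shift(cart, 0, 1, &part_N, &part_S); // dim 0: source = N, dest = S
    MPI_Cart_shift(cart, 1, 1, &part_O, &part_E); // dim 1: source = W (O), dest = E

    const int n = 8; // hypothetical frame length, stands in for loc_r[rank]
    std::vector<int> ssx(n, 0), rdx(n, 0), sdx(n, 0), rsx(n, 0);

    // Correctly paired: send WEST while receiving from EAST, and vice versa.
    MPI_Sendrecv(ssx.data(), n, MPI_INT, part_O, 0,
                 rdx.data(), n, MPI_INT, part_E, 0, cart, MPI_STATUS_IGNORE);
    MPI_Sendrecv(sdx.data(), n, MPI_INT, part_E, 1,
                 rsx.data(), n, MPI_INT, part_O, 1, cart, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}
On a boundary process, a Sendrecv whose partner is MPI_PROC_NULL completes immediately without transferring anything, which is exactly why no extra branching is needed there.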
Upvotes: 2