Reputation: 491
EDIT #1:
So the SOLUTION IS:
The line
MPI_Gatherv(buffer, rank, MPI_INT, buffer, receive_counts, receive_displacements, MPI_INT, 0, MPI_COMM_WORLD);
has to be changed to
MPI_Gatherv(buffer, receive_counts[rank], MPI_INT, buffer, receive_counts, receive_displacements, MPI_INT, 0, MPI_COMM_WORLD);
Thank you again for the help.
ORIGINAL POST:
My code is from DeinoMPI.
When I run mpiexec -localonly 4 skusamGatherv.exe, everything is OK.
If I change the line
int receive_counts[4] = { 0, 1, 2, 3 };
to
int receive_counts[4] = { 0, 1, 2, 1 };
it still compiles, but when I run mpiexec -localonly 4 skusamGatherv.exe I get an error.
I think it is supposed to work.
Thanks for the help.
The error I get:
Fatal error in MPI_Gatherv: Message truncated, error stack:
MPI_Gatherv(363)........................: MPI_Gatherv failed(sbuf=0012FF4C, scount=0, MPI_INT, rbuf=0012FF2C, rcnts=0012FEF0, displs=0012FED8, MPI_INT, root=0, MPI_COMM_WORLD) failed
MPIDI_CH3_PktHandler_EagerShortSend(351): Message from rank 3 and tag 4 truncated; 12 bytes received but buffer size is 4
unable to read the cmd header on the pmi context, Error = -1
.
0. [0][0][0][0][0][0] , [0][0][0][0][0][0]
Error posting readv, An existing connection was forcibly closed by the remote host.(10054)
unable to read the cmd header on the pmi context, Error = -1
.
Error posting readv, An existing connection was forcibly closed by the remote host.(10054)
1. [1][1][1][1][1][1] , [0][0][0][0][0][0]
unable to read the cmd header on the pmi context, Error = -1
.
Error posting readv, An existing connection was forcibly closed by the remote host.(10054)
2. [2][2][2][2][2][2] , [0][0][0][0][0][0]
unable to read the cmd header on the pmi context, Error = -1
.
Error posting readv, An existing connection was forcibly closed by the remote host.(10054)
3. [3][3][3][3][3][3] , [0][0][0][0][0][0]
job aborted:
rank: node: exit code[: error message]
0: jan-pc-nb: 1: Fatal error in MPI_Gatherv: Message truncated, error stack:
MPI_Gatherv(363)........................: MPI_Gatherv failed(sbuf=0012FF4C, scount=0, MPI_INT, rbuf=0012FF2C, rcnts=0012FEF0, displs=0012FED8, MPI_INT, root=0, MPI_COMM_WORLD) failed
MPIDI_CH3_PktHandler_EagerShortSend(351): Message from rank 3 and tag 4 truncated; 12 bytes received but buffer size is 4
1: jan-pc-nb: 1
2: jan-pc-nb: 1
3: jan-pc-nb: 1
Press any key to continue . . .
My code:
#include "mpi.h"
#include <stdio.h>
int main(int argc, char *argv[])
{
int buffer[6];
int rank, size, i;
int receive_counts[4] = { 0, 1, 2, 3 };
int receive_displacements[4] = { 0, 0, 1, 3 };
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (size != 4)
{
if (rank == 0)
{
printf("Please run with 4 processes\n");fflush(stdout);
}
MPI_Finalize();
return 0;
}
for (i=0; i<rank; i++)
{
buffer[i] = rank;
}
MPI_Gatherv(buffer, rank, MPI_INT, buffer, receive_counts, receive_displacements, MPI_INT, 0, MPI_COMM_WORLD);
if (rank == 0)
{
for (i=0; i<6; i++)
{
printf("[%d]", buffer[i]);
}
printf("\n");
fflush(stdout);
}
MPI_Finalize();
return 0;
}
Upvotes: 1
Views: 2734
Reputation: 5223
Step back and consider what MPI_Gatherv is doing: it's an MPI_Gather (in this case, to rank 0) in which each process can send a different amount of data.
In your example, rank 0 sends 0 ints, rank 1 sends 1 int, rank 2 sends 2 ints, and rank 3 sends 3 ints.
MPIDI_CH3_PktHandler_EagerShortSend(351): Message from rank 3 and tag 4 truncated; 12 bytes received but buffer size is 4
It's buried in a lot of other information, but it says that rank 3 sent 3 ints (12 bytes), while rank 0 only had room for 1 int.
Look at the first three arguments to MPI_Gatherv: 'buffer, rank, MPI_INT'. No matter what you set the receive counts to, rank 3 will always send 3 ints.
Note that you can under-fill a buffer (you could have made the last item in receive_counts 100, say), but with the smaller receive_counts[3] you told the MPI library to expect only 1 int, even though you sent 3.
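Not part of the original example, but here is a minimal sketch (assuming the same 4-process setup and the receive_counts = { 0, 1, 2, 1 } that triggered the error) of one way to keep the two sides consistent: each rank takes its send count from the same receive_counts array the root uses, and the displacements are derived from the counts, so changing receive_counts can never truncate a message. It uses separate send and receive buffers, which is a deviation from the posted code.

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int sendbuf[4];
    int recvbuf[16];                         /* large enough for any counts used here */
    int rank, size, i, total;
    int receive_counts[4] = { 0, 1, 2, 1 };  /* the counts that triggered the error */
    int receive_displacements[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (size != 4)
    {
        if (rank == 0)
        {
            printf("Please run with 4 processes\n"); fflush(stdout);
        }
        MPI_Finalize();
        return 0;
    }

    /* displacement of rank k = sum of the counts of ranks 0..k-1 */
    receive_displacements[0] = 0;
    for (i = 1; i < 4; i++)
    {
        receive_displacements[i] = receive_displacements[i - 1] + receive_counts[i - 1];
    }

    /* fill exactly as many ints as this rank is going to send */
    for (i = 0; i < receive_counts[rank]; i++)
    {
        sendbuf[i] = rank;
    }

    /* the send count is receive_counts[rank], so it always matches what the
       root expects from this rank and nothing gets truncated */
    MPI_Gatherv(sendbuf, receive_counts[rank], MPI_INT,
                recvbuf, receive_counts, receive_displacements, MPI_INT,
                0, MPI_COMM_WORLD);

    if (rank == 0)
    {
        total = receive_displacements[3] + receive_counts[3];
        for (i = 0; i < total; i++)
        {
            printf("[%d]", recvbuf[i]);
        }
        printf("\n");
        fflush(stdout);
    }

    MPI_Finalize();
    return 0;
}

With these counts, rank 0 contributes nothing, rank 1 one int, rank 2 two ints, and rank 3 one int, so rank 0 should print [1][2][2][3].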
Upvotes: 1