Suman Vajjala

Reputation: 217

Strange behavior with Isend

The following is a simple piece of code in which each processor i sends data to processor i+1 using Isend and then waits until all of its sends are complete.

Code:

#include <mpi.h>
#include <vector>
#include <cstdio>

int main ()
{
  std::vector<double> sendbuffer;
  int myrank, nprocs;
  std::vector<MPI::Request> req_v;

  MPI::Init ();
  nprocs = MPI::COMM_WORLD.Get_size ();
  myrank = MPI::COMM_WORLD.Get_rank ();

  sendbuffer.resize(20000);

  int startin = 0;
  // Every rank except the last one sends its buffer to rank+1.
  if (myrank != nprocs-1)
    for (int i = myrank+1; i <= (myrank+1); i++)
      {
        int sendrank = i;
        int msgtag = myrank * sendrank;
        int msgsz = sendbuffer.size();
        double *sdata = &(sendbuffer[startin]);
        MPI::Request req;
        req = MPI::COMM_WORLD.Isend (sdata, msgsz, MPI::DOUBLE, sendrank, msgtag);
        req_v.push_back (req);
      }

  printf("Size-%d %d\n", myrank, (int) req_v.size());
  if (req_v.size() > 0)
    MPI::Request::Waitall (req_v.size(), &(req_v[0]));

  printf("Over-%d\n", myrank);
  MPI::COMM_WORLD.Barrier();

  MPI::Finalize ();
  return 0;
}

The code completes for a buffer size of 1500, but for 20000 it stalls. This behavior seems strange to me: I thought matching receives were not needed for Isend to complete. Please suggest possible reasons for this behavior.

Upvotes: 1

Views: 108

Answers (1)

Wesley Bland

Reputation: 9062

You never actually receive the messages. Without the corresponding calls to Recv (or Irecv), the send calls may never complete.

You can get more details at this question, but generally, sends are allowed to complete without the corresponding receive having been posted as long as the messages can be buffered within MPI (either on the sender side or the receiver side). Eventually, your system will run out of buffer space and the send calls will stop completing until you post the corresponding receives and free up some of those buffers.
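For illustration, a minimal sketch (not part of the original code) of what the receive side could look like: every rank other than rank 0 posts the matching blocking Recv before waiting on its own sends, reusing the question's tag convention (sender rank times receiver rank). The recvbuffer variable is introduced here purely for the example.

// Sketch only: post the matching Recv on the destination rank before Waitall.
// recvbuffer is a hypothetical buffer added for this example.
std::vector<double> recvbuffer (sendbuffer.size());
if (myrank != 0)
  {
    int recvrank = myrank - 1;
    int msgtag = recvrank * myrank;   // matches the sender's myrank * sendrank
    MPI::COMM_WORLD.Recv (&(recvbuffer[0]), (int) recvbuffer.size(),
                          MPI::DOUBLE, recvrank, msgtag);
  }

if (req_v.size() > 0)
  MPI::Request::Waitall (req_v.size(), &(req_v[0]));

With the receives posted, every Isend has a matching receive regardless of the message size, so Waitall no longer depends on MPI having enough internal buffer space.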

Upvotes: 1
