Călin

Reputation: 337

Is it possible to force MPI to always block on send?

Is there a way to force MPI to always block on send? This could be useful when hunting for deadlocks in a distributed algorithm that would otherwise be masked by whatever buffering MPI chooses to do on send.

For example, the following program (run with 2 processes) works without problems on my machine:

// C++
#include <iostream>
#include <thread>

// Boost
#include <boost/mpi.hpp>
namespace mpi = boost::mpi;

int main() {
    using namespace std::chrono_literals;

    mpi::environment env;
    mpi::communicator world;
    auto me = world.rank();
    auto other = 1 - me;

    char buffer[10] = {0};

    while (true) {
        // Both ranks send first, so this loop only makes progress if
        // MPI buffers the outgoing message internally (eager protocol)
        // instead of waiting for the matching receive.
        world.send(other, 0, buffer);
        world.recv(other, 0, buffer);
        std::cout << "Node " << me << " received" << std::endl;

        std::this_thread::sleep_for(200ms);
    }
}

But if I change the size of the buffer to 10000, both processes block in send indefinitely.

Upvotes: 1

Views: 283

Answers (3)

Harald

Reputation: 3180

You can consider tuning the eager limit (http://blogs.cisco.com/performance/what-is-an-mpi-eager-limit) to force the send operation to block for any message size. How the eager limit is set depends on the MPI implementation. On Intel MPI, for instance, you can use the I_MPI_EAGER_THRESHOLD environment variable (see https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Communication_Fabrics_Control.htm).
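For example (assuming Intel MPI and a launcher named mpirun; the exact variable name and default differ per implementation), setting the threshold to 0 should push every message onto the rendezvous path, so the questioner's program would hang even with the 10-byte buffer:

I_MPI_EAGER_THRESHOLD=0 mpirun -n 2 ./a.out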

Upvotes: 1

Gilles

Reputation: 9489

For pure MPI codes, what you describe is exactly what MPI_Ssend() gives you. Here, however, you are not using pure MPI; you are using boost::mpi, and unfortunately, according to boost::mpi's documentation, MPI_Ssend() isn't supported.

That said, maybe boost::mpi offers another way, but I doubt it.
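One possible escape hatch (a sketch, not an official boost::mpi feature): boost::mpi::communicator converts implicitly to the underlying MPI_Comm, so a synchronous send can be issued through the C API while the rest of the program keeps using boost::mpi:

// C++
#include <boost/mpi.hpp>
namespace mpi = boost::mpi;

int main() {
    mpi::environment env;
    mpi::communicator world;
    int other = 1 - world.rank();

    char buffer[10] = {0};

    // world converts implicitly to MPI_Comm, so MPI_Ssend can be called
    // directly; it returns only once the matching receive is posted.
    MPI_Ssend(buffer, static_cast<int>(sizeof(buffer)), MPI_CHAR,
              other, 0, world);
    MPI_Recv(buffer, static_cast<int>(sizeof(buffer)), MPI_CHAR,
             other, 0, world, MPI_STATUS_IGNORE);
}

In the questioner's loop, both ranks would block in MPI_Ssend right away, which is exactly the deadlock signal being sought.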

Upvotes: 2

Eran

Reputation: 22020

If you want blocking behavior, use MPI_Ssend. It will block until a matching receive has been posted, without buffering the request. The amount of buffering provided by MPI_Send is (intentionally) implementation specific; the behavior you get with a buffer of 10000 may differ under another implementation.

I don't know whether you can actually tweak the buffering configuration, and I wouldn't try, because it would not be portable. Instead, I'd use the MPI_Ssend variant in some debug configuration, and the default MPI_Send when best performance is needed, as in the sketch below.
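A minimal sketch of that debug/release split (the macro names here are made up for illustration):

// C++
#include <mpi.h>

// In debug builds, route every standard send through MPI_Ssend so that
// any reliance on implementation buffering shows up as a hang; in
// release builds, keep the potentially faster MPI_Send.
#ifdef DEBUG_SYNCHRONOUS_SENDS
  #define APP_SEND MPI_Ssend
#else
  #define APP_SEND MPI_Send
#endif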

(disclaimer: I'm not familiar with boost's implementation, but MPI is a standard. Also, I saw Gilles's answer only after posting this one...)

Upvotes: 1
