NewToAndroid

Reputation: 581

Using pthreads with MPICH

I am having trouble using pthreads in my MPI program. The program runs fine without pthreads, but I then decided to execute a time-consuming operation in parallel, so I create a pthread that calls MPI_Probe, MPI_Get_count, and MPI_Recv. The program fails at MPI_Probe and no error code is returned. This is how I initialize the MPI environment:

MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided_threading_support);

The provided threading support is '3', which I assume is MPI_THREAD_SERIALIZED. Any ideas on how I can solve this problem?
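Here is a minimal sketch of the pattern (simplified, with an invented tag and message type, not my actual code); it is meant to run with two processes:

    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TAG 42  /* illustrative tag, not from the real program */

    /* Receiver thread: probe for a message of unknown length, size the
       buffer from the probed status, then receive it. */
    static void *receiver_thread(void *arg)
    {
        MPI_Status status;
        int count;

        MPI_Probe(0, TAG, MPI_COMM_WORLD, &status);  /* the failing call */
        MPI_Get_count(&status, MPI_INT, &count);

        int *buf = malloc(count * sizeof(int));
        MPI_Recv(buf, count, MPI_INT, 0, TAG, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("received %d ints\n", count);
        free(buf);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, rank;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int data[4] = {1, 2, 3, 4};
            MPI_Send(data, 4, MPI_INT, 1, TAG, MPI_COMM_WORLD);
        } else if (rank == 1) {
            pthread_t tid;
            pthread_create(&tid, NULL, receiver_thread, NULL);
            /* ... the main thread does its time-consuming work here ... */
            pthread_join(tid, NULL);
        }

        MPI_Finalize();
        return 0;
    }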

Upvotes: 1

Views: 196

Answers (2)

Hristo Iliev

Reputation: 74385

The provided threading support is '3' which I assume is MPI_THREAD_SERIALIZED.

The MPI standard defines the thread support levels as named constants and only requires that their values be monotonic, i.e. MPI_THREAD_SINGLE < MPI_THREAD_FUNNELED < MPI_THREAD_SERIALIZED < MPI_THREAD_MULTIPLE. The actual numeric values are implementation-specific and should never be used directly or compared against; compare with the named constants instead.
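For illustration, a portable check of the obtained level compares only against the named constants; this minimal sketch assumes the program requires full multi-threading support:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        /* Rely on the monotonicity of the named constants, never on
           implementation-specific numeric values such as 3 */
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "insufficient thread support\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_Finalize();
        return 0;
    }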

MPI communication calls by default never return error codes other than MPI_SUCCESS. The reason is that MPI invokes the communicator's error handler before a call returns, and all communicators are initially created with MPI_ERRORS_ARE_FATAL installed as their error handler. That handler terminates the program and usually prints some debugging information, e.g. the reason for the failure. Both MPICH (and its countless variants) and Open MPI produce quite elaborate reports on what led to the termination.

To enable user error handling on communicator comm, you should make the following call:

MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

Watch out for the returned error codes: their numerical values are also implementation-specific.
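To illustrate both points, the following self-contained sketch switches MPI_COMM_WORLD to MPI_ERRORS_RETURN and decodes the implementation-specific error code with MPI_Error_string; the error is provoked deliberately by probing an out-of-range rank:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int size, rc;
        MPI_Status status;

        MPI_Init(&argc, &argv);

        /* Replace the default MPI_ERRORS_ARE_FATAL handler so that
           errors are reported back to the caller instead */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Probing a non-existent rank provokes an error on purpose */
        rc = MPI_Probe(size, 0, MPI_COMM_WORLD, &status);
        if (rc != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int len;

            /* Translate the implementation-specific code into text */
            MPI_Error_string(rc, msg, &len);
            fprintf(stderr, "MPI_Probe failed: %s\n", msg);
        }

        MPI_Finalize();
        return 0;
    }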

Upvotes: 2

Sneftel

Reputation: 41474

If your MPI implementation isn't willing to give you MPI_THREAD_MULTIPLE, there are three things you can do:

  1. Get a new MPI implementation.
  2. Protect MPI calls with a critical section (a sketch of this follows the list).
  3. Cut it out with the threading thing.
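For completeness, here is a minimal sketch of option 2, assuming the library grants at least MPI_THREAD_SERIALIZED; mpi_lock and locked_probe are illustrative names, not part of any MPI API. Note that holding the lock across a blocking MPI_Probe would starve every other thread, so the sketch polls with MPI_Iprobe and releases the lock between polls:

    #include <mpi.h>
    #include <pthread.h>

    /* One global mutex serializes all MPI calls made by the threads,
       which is exactly what MPI_THREAD_SERIALIZED requires. */
    static pthread_mutex_t mpi_lock = PTHREAD_MUTEX_INITIALIZER;

    static void locked_probe(int source, int tag, MPI_Comm comm,
                             MPI_Status *status)
    {
        int flag = 0;

        /* Poll instead of blocking, so other threads can take the
           lock and make their own MPI calls in between */
        while (!flag) {
            pthread_mutex_lock(&mpi_lock);
            MPI_Iprobe(source, tag, comm, &flag, status);
            pthread_mutex_unlock(&mpi_lock);
        }
    }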

I would suggest #3. The whole point of MPI is parallelism -- if you find yourself creating multiple threads for a single MPI subprocess, you should consider whether those threads should have been independent subprocesses to begin with.

Particularly with MPI_THREAD_MULTIPLE. I could maybe see a use for MPI_THREAD_SERIALIZED, if your threads are sub-subprocess workers for the main subprocess thread... but MULTIPLE implies that you're tossing data around all over the place. That loses you the primary convenience offered by MPI, namely synchronization. You'll find yourself essentially reimplementing MPI on top of MPI.

Okay, now that you've read all that, the punchline: in MPICH, 3 happens to be MPI_THREAD_MULTIPLE. But seriously. Reconsider your architecture.

Upvotes: -1
