Reputation: 295
Is it possible in Pybind11 to use mpi4py on the Python side and then to hand over the communicator to the C++ side?
If so, how would it work?
If not, is it possible for example with Boost? And if so, how would it be done?
I searched the web literally for hours but didn't find anything.
Upvotes: 5
Views: 1747
Reputation: 845
Passing an mpi4py communicator to C++ using pybind11 can be done using the mpi4py C-API. The corresponding header files can be located using the following Python code:
import mpi4py
print(mpi4py.get_include())
To conveniently pass communicators between Python and C++, a custom pybind11 type caster can be implemented. For this purpose, we start with the typical preamble.
// native.cpp
#include <pybind11/pybind11.h>

#include <mpi.h>
#include <mpi4py/mpi4py.h>

#include <iostream> // for std::cout used in the example functions below

namespace py = pybind11;
In order for pybind11 to automatically convert a Python type to a C++ type, we need a distinct type that the C++ compiler can recognise. Unfortunately, the MPI standard does not specify the underlying type of MPI_Comm. Worse, in common MPI implementations MPI_Comm can be defined as int or void*, which the C++ compiler cannot distinguish from regular use of these types. To create a distinct type, we define a wrapper class for MPI_Comm which implicitly converts to and from MPI_Comm.
struct mpi4py_comm {
  mpi4py_comm() = default;
  mpi4py_comm(MPI_Comm value) : value(value) {}
  operator MPI_Comm () { return value; }

  MPI_Comm value;
};
The type caster is then implemented as follows:
namespace pybind11 { namespace detail {
  template <> struct type_caster<mpi4py_comm> {
    public:
      PYBIND11_TYPE_CASTER(mpi4py_comm, _("mpi4py_comm"));

      // Python -> C++
      bool load(handle src, bool)
      {
        PyObject *py_src = src.ptr();

        // Check that we have been passed an mpi4py communicator
        if (PyObject_TypeCheck(py_src, &PyMPIComm_Type)) {
          // Convert to regular MPI communicator
          value.value = *PyMPIComm_Get(py_src);
        } else {
          return false;
        }

        return !PyErr_Occurred();
      }

      // C++ -> Python
      static handle cast(mpi4py_comm src,
                         return_value_policy /* policy */,
                         handle /* parent */)
      {
        // Create an mpi4py handle
        return PyMPIComm_New(src.value);
      }
  };
}} // namespace pybind11::detail
Below is the code of an example module which uses the type caster. Note that we use mpi4py_comm instead of MPI_Comm in the function definitions exposed to pybind11. However, due to the implicit conversion, we can use these variables as regular MPI_Comm variables. In particular, they can be passed to any function expecting an argument of type MPI_Comm.
// receive a communicator and check if it equals MPI_COMM_WORLD
void print_comm(mpi4py_comm comm)
{
  if (comm == MPI_COMM_WORLD) {
    std::cout << "Received the world." << std::endl;
  } else {
    std::cout << "Received something else." << std::endl;
  }
}

mpi4py_comm get_comm()
{
  return MPI_COMM_WORLD; // Just return MPI_COMM_WORLD for demonstration
}
PYBIND11_MODULE(native, m)
{
  // import the mpi4py API
  if (import_mpi4py() < 0) {
    throw std::runtime_error("Could not load mpi4py API.");
  }

  // register the test functions
  m.def("print_comm", &print_comm, "Do something with the mpi4py communicator.");
  m.def("get_comm", &get_comm, "Return some communicator.");
}
The module can be compiled, e.g., using
mpicxx -O3 -Wall -shared -std=c++14 -fPIC \
$(python3 -m pybind11 --includes) \
-I$(python3 -c 'import mpi4py; print(mpi4py.get_include())') \
native.cpp -o native$(python3-config --extension-suffix)
and tested using
import native
from mpi4py import MPI
import math
native.print_comm(MPI.COMM_WORLD)
# Create a cart communicator for testing
# (MPI_COMM_WORLD.size has to be a square number)
d = int(math.sqrt(MPI.COMM_WORLD.size))
cart_comm = MPI.COMM_WORLD.Create_cart([d, d], [1, 1], False)
native.print_comm(cart_comm)
print(f'native.get_comm() == MPI.COMM_WORLD '
f'-> {native.get_comm() == MPI.COMM_WORLD}')
The output should be:
Received the world.
Received something else.
native.get_comm() == MPI.COMM_WORLD -> True
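For reference, one way to launch the test (assuming the script above is saved as test.py; the name is chosen here for illustration) is via mpiexec with a square number of ranks, e.g.
mpiexec -n 4 python3 test.py
With more than one rank, each process prints its own copy of the lines above.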
Upvotes: 9
Reputation: 2830
This is indeed possible. As was pointed out in the comments by John Zwinck, MPI_COMM_WORLD will automatically point to the correct communicator, so nothing has to be passed from Python to the C++ side.
First we have a simple pybind11 module that exposes a single function which simply prints some MPI information (taken from one of the many online tutorials). To compile the module, see this pybind11 cmake example; a rough sketch is also given after the module code below.
#include <pybind11/pybind11.h>

#include <mpi.h>
#include <stdio.h>

void say_hi()
{
  int world_size;
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);

  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int name_len;
  MPI_Get_processor_name(processor_name, &name_len);

  printf("Hello world from processor %s, rank %d out of %d processors\n",
         processor_name,
         world_rank,
         world_size);
}

PYBIND11_MODULE(mpi_lib, pybind_module)
{
  constexpr auto MODULE_DESCRIPTION = "Just testing out mpi with python.";
  pybind_module.doc() = MODULE_DESCRIPTION;

  pybind_module.def("say_hi", &say_hi, "Each process is allowed to say hi");
}
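As a rough sketch of the CMake side (file and target names are assumptions, not taken from the linked example), something along these lines should be enough to build the module:
cmake_minimum_required(VERSION 3.12)
project(mpi_lib_example LANGUAGES CXX)

# pybind11 provides the pybind11_add_module helper once found
find_package(pybind11 REQUIRED)
find_package(MPI REQUIRED)

# assumes the C++ code above is saved as mpi_lib.cpp
pybind11_add_module(mpi_lib mpi_lib.cpp)
target_link_libraries(mpi_lib PRIVATE MPI::MPI_CXX)
Configuring and building in a separate build directory (cmake .. followed by make) then produces the mpi_lib extension module.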
Next the Python side. Here I reuse the example from this post: Hiding MPI in Python, and simply plug in the pybind11 module. So first the Python script that will spawn the MPI child processes:
import sys

import numpy as np
from mpi4py import MPI


def parallel_fun():
    comm = MPI.COMM_SELF.Spawn(
        sys.executable,
        args=['child.py'],
        maxprocs=4)

    N = np.array(0, dtype='i')

    comm.Reduce(None, [N, MPI.INT], op=MPI.SUM, root=MPI.ROOT)

    print(f'We got the magic number {N}')
And the child process file, saved as child.py (the name referenced in the Spawn call above). Here we simply call the library function and it just works.
from mpi4py import MPI
import numpy as np
from mpi_lib import say_hi
comm = MPI.Comm.Get_parent()
N = np.array(comm.Get_rank(), dtype='i')
say_hi()
comm.Reduce([N, MPI.INT], None, op=MPI.SUM, root=0)
The end result is:
from prog import parallel_fun
parallel_fun()
# Hello world from processor arch_zero, rank 1 out of 4 processors
# Hello world from processor arch_zero, rank 2 out of 4 processors
# Hello world from processor arch_zero, rank 0 out of 4 processors
# Hello world from processor arch_zero, rank 3 out of 4 processors
# We got the magic number 6
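Note that because the children are spawned via MPI.COMM_SELF.Spawn, the parent can usually be started as a plain Python process (assuming the parent code above is saved as prog.py, as the import suggests); depending on the MPI installation it may instead need to be launched under mpiexec with a single rank:
python3 -c "from prog import parallel_fun; parallel_fun()"
# or, if the MPI installation requires it:
mpiexec -n 1 python3 -c "from prog import parallel_fun; parallel_fun()"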
Upvotes: 7