Sidny Sho

Reputation: 41

How to send GMP or MPFR types with MPI

I'm trying to send variables of type mpfr_t using MPI_Scatter. For example:

mpfr_t *v1 = new mpfr_t[10];  
mpfr_t *v2 = new mpfr_t[10];   
MPI_Scatter(v1, 5, MPI_BYTE, v2, 5, MPI_BYTE, 0, MPI_COMM_WORLD ); 
for (int i = 0; i < 5; i++) 
    mpfr_printf("value rank %d -  %RNf \n", ProcRank, v2[i]);

It prints:

value rank 0 - nan
value rank 0 - nan
value rank 0 - nan
.....
value rank 1 - nan
value rank 0 - nan

But it works with MPI_Bcast. What am I doing wrong? The code is C/C++, and the MPI library is OpenMPI-1.6.

Upvotes: 4

Views: 1070

Answers (2)

IngoMeyer

Reputation: 463

As @LaryxDecidua already pointed out, MPFR numbers use dynamic memory on the heap and thus cannot be used in MPI operations directly. However, if you can use MPFR version 4.0.0 or later, you can serialize numbers to a linear memory buffer with the MPFR function mpfr_fpif_export and send that buffer with MPI instead. The receiving end can restore the original numbers by calling mpfr_fpif_import. Both functions operate on FILE * handles, so you also need open_memstream and/or fmemopen to map a file handle onto a memory buffer. Unfortunately, the length of a serialized number depends not only on the precision but also on the value (e.g. the value 0 occupies fewer bytes than other numbers), so collective operations like MPI_Scatter or MPI_Gather, which need a fixed buffer size for every rank, won't work. Use MPI_Send and MPI_Recv pairs instead.

Below is a complete C++17 example using mpreal, which provides a nice interface to MPFR numbers for C++ code bases:

#include <cmath>
#include <cstdio>
#include <cstring>
#include <iostream>
#include <mpi.h>
#include <mpreal.h>
#include <numeric>
#include <optional>
#include <vector>


int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);


    std::vector<mpfr::mpreal> real_vec{world_rank, M_PI};

    // Serialize the mpreal vector to memory (send_buf)
    char *send_buf;
    size_t send_buf_size;
    FILE *real_stream = open_memstream(&send_buf, &send_buf_size);
    for (auto &real : real_vec) {
        mpfr_fpif_export(real_stream, real.mpfr_ptr());
    }
    fclose(real_stream);

    // Gather the buffer length of all processes
    std::optional<std::vector<size_t>> send_buf_size_vec;
    if (world_rank == 0) {
        send_buf_size_vec = std::vector<size_t>(world_size);
    }
    MPI_Gather(&send_buf_size, 1, MPI_UNSIGNED_LONG, (send_buf_size_vec ? send_buf_size_vec->data() : nullptr), 1,
               MPI_UNSIGNED_LONG, 0, MPI_COMM_WORLD);

    if (world_rank != 0) {
        // Send the serialized mpreal vector to rank 0
        MPI_Send(send_buf, send_buf_size, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
    } else {
        // Create a recv buffer which can hold the data from all processes
        size_t recv_buf_size = std::accumulate(send_buf_size_vec->begin(), send_buf_size_vec->end(), 0UL);
        std::vector<char> recv_buf(recv_buf_size);
        auto all_buf_it = recv_buf.begin();

        // Directly copy the send buffer of process 0
        std::memcpy(&*all_buf_it, send_buf, send_buf_size);
        all_buf_it += (*send_buf_size_vec)[0];

        // Receive serialized numbers from all other ranks
        MPI_Status status;
        for (int i = 1; i < world_size; ++i) {
            MPI_Recv(&*all_buf_it, (*send_buf_size_vec)[i], MPI_BYTE, i, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            all_buf_it += (*send_buf_size_vec)[i];
        }

        // Import all mpreals from the receive buffer
        real_stream = fmemopen(recv_buf.data(), recv_buf.size(), "rb");
        std::vector<mpfr::mpreal> all_real_vec(world_size * real_vec.size());
        for (auto &real : all_real_vec) {
            mpfr_fpif_import(real.mpfr_ptr(), real_stream);
        }
        fclose(real_stream);

        // Print all received values
        std::cout << "Read values:" << std::endl;
        for (auto &real : all_real_vec) {
            std::cout << real << std::endl;
        }
    }

    MPI_Finalize();

    return 0;
}

Upvotes: 1

timos

Reputation: 2707

You specified the sendcount as 5 and the datatype as MPI_BYTE. This seems odd. If you want to use MPI_BYTE and you want to send 5 mpfr_t values, then specify a sendcount of 5*sizeof(mpfr_t). Another option would be to create your own MPI derived datatype (if you want to get rid of the sizeof()).

Upvotes: 1
