user2743

Reputation: 1513

distributing part of a vector to slave processes

Using C++ and OpenMPI, I'd like to create a vector and then, in the master process, distribute parts of the vector evenly to the slave processes to be processed. So if the vector has 100 integers and there are 5 slave processes, each slave process would be sent a vector of 20 integers.

Below is the code I was trying but it's not working.

#include <iostream>
#include <list>
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <vector>
#include <algorithm>
#include <map>
#include <random>

using namespace std;

int main(int argc, char **argv)
{
int rank, size, tag, rc, i;
MPI_Status status;
vector<int> vec(100);
vector<int> split_vec;

rc = MPI_Init(&argc, &argv);
rc = MPI_Comm_size(MPI_COMM_WORLD, &size);
rc = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
tag=7;

random_device rd;
mt19937 gen(rd());
uniform_int_distribution<> dis;
generate(vec.begin(), vec.end(), [&](){ return dis(gen); });
// 100 ints / 5 slave processes
int split_size = vec.size() / size;
// split_size = size of vector of ints that each slave process should get
int offset = 0;
int j = 0;
int max = split_size;
if (rank==0) {
    // master process
    for (int i=1; i<size; ++i){
        split_vec.clear();
        while(j < max){
            int elements = j + offset;
            split_vec.push_back(vec[elements]);
            j++;
        }
        max = max + split_size;
        offset = offset + split_size;
        MPI_Send(&split_vec[0], split_vec.size(), MPI_INT, i, tag, MPI_COMM_WORLD);
    }
}
else{
    // slave processes
    MPI_Recv(&split_vec[0], split_vec.size(), MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
    // process the vector
}
MPI_Finalize();
}// main

When I run the code above I get the following error:

[comp-01:7562] *** An error occurred in MPI_Recv
[comp-01:7562] *** on communicator MPI_COMM_WORLD
[comp-01:7562] *** MPI_ERR_TRUNCATE: message truncated
[comp-01:7562] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort

It seems to be working its way through the vector<int> vec(100) vector properly, grabbing the first twenty integers, the next twenty, etc. But I think it's crashing because I keep overwriting the elements in split_vec and the garbage values are making it crash. When I print the vector before I send it, below is what is printed before it crashes.

0 64 0 33 0 1819240283 808662067 976302125 892679984 2121015 0 49 0 -773125744 32554 0 0 -773125760 32554 0 0

There are no zeros or negative numbers in the vec vector, though. So I tried to fix this by adding the split_vec.clear(); call, but that doesn't seem to work either.

Below is what I used to compile and run it.

mpic++ -std=c++11 prog.cpp -o prog.o
mpirun -np 6 prog.o  // 1 master + 5 slave processes

Upvotes: 1

Views: 244

Answers (1)

francis

Reputation: 9817

The following changes are needed to make MPI_Recv() succeed:

  • In the code you posted, split_vec.size() is 0 in MPI_Recv() since split_vec has not been populated yet. Hence, MPI_Recv() expects a zero-length message, while the master sends split_size integers: this mismatch triggers the MPI_ERR_TRUNCATE error you observe, and the vector is never filled.

  • The receive buffer must be allocated first. In the present case, it means that split_vec.reserve(split_size); must be added prior to the call to MPI_Recv(&split_vec[0], split_size, MPI_INT, ...);. Indeed, the method vector::reserve(size_type n) will increase the capacity of the vector to at least n elements. As noticed by @Zulan, using split_vec.resize(split_size); is more correct, as it also updates the size of the vector (see the short sketch below).

Finally, the following code does the trick:

split_vec.resize(split_size);
MPI_Recv(&split_vec[0], split_vec.size(), MPI_INT, ...);
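
As a side note, the difference between reserve() and resize() can be checked with a minimal standalone sketch (plain C++, independent of MPI): reserve() only grows the capacity, so size() stays 0, while resize() actually creates the elements.

#include <cassert>
#include <vector>

int main()
{
    std::vector<int> v;

    v.reserve(20);            // only the capacity grows; the vector is still empty
    assert(v.size() == 0);    // so v.size() passed as a receive count would be 0

    v.resize(20);             // now the vector really contains 20 (zero-initialized) ints
    assert(v.size() == 20);   // v.size() matches the expected message length
}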

Here is the corrected code. Notice that the indexes used to populate the array on rank 0 had to be modified: j is reset to 0 for each destination rank. Compile it with mpiCC main.cpp -o main -std=c++11 -Wall

#include <iostream>
#include <list>
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <vector>
#include <algorithm>
#include <map>
#include <random>

using namespace std;

int main(int argc, char **argv)
{
    int rank, size, tag;
    MPI_Status status;
    vector<int> vec(100);
    vector<int> split_vec;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    tag=7;

    random_device rd;
    mt19937 gen(rd());
    uniform_int_distribution<> dis;
    generate(vec.begin(), vec.end(), [&](){ return dis(gen); });
    // 100 ints / 5 slave processes
    int split_size = vec.size() / size;
    // split_size = size of vector of ints that each slave process should get
    int offset = 0;
    if (rank==0) {
        // master process
        for (int i=1; i<size; ++i){
            split_vec.clear();
            split_vec.reserve(split_size);
            int j=0; // added j=0
            while(j < split_size){
                int elements = j + offset;
                split_vec.push_back(vec[elements]);
                j++;
            }
            //max =  split_size; 
            offset = offset + split_size;
            MPI_Send(&split_vec[0], split_vec.size(), MPI_INT, i, tag, MPI_COMM_WORLD);
        }
    }
    else{
        // slaves processes
        split_vec.resize(split_size);
        MPI_Recv(&split_vec[0], split_vec.size(), MPI_INT, 0, tag, MPI_COMM_WORLD, &status); // split_vec.size() now equals split_size, so the receive count matches the incoming message
    }
    MPI_Finalize();
}// main

Finally, you are going to enjoy the function MPI_Scatter(), which does exactly what you want!

#include <iostream>
#include <list>
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <vector>
#include <algorithm>
#include <map>
#include <random>

using namespace std;

int main(int argc, char **argv)
{
    int rank, size;

    vector<int> vec(100);
    vector<int> split_vec;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    random_device rd;
    mt19937 gen(rd());
    uniform_int_distribution<> dis;
    generate(vec.begin(), vec.end(), [&](){ return dis(gen); });
    // 100 ints / 5 slave processes
    int split_size = vec.size() / size;
    // split_size = size of vector of ints that each slave process should get


    split_vec.resize(split_size);
    MPI_Scatter(&vec[0],split_size,MPI_INT,&split_vec[0],split_size,MPI_INT,0,MPI_COMM_WORLD);

    MPI_Finalize();
}// main
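
Note that MPI_Scatter() sends the same count to every rank, so if vec.size() is not divisible by the number of processes (for instance 100 integers over 6 processes), the last few elements simply stay on the root. For an uneven split you can use MPI_Scatterv(), which takes per-rank counts and displacements. The following is only a sketch of that variant; the counts/displs arrays and the iota() initialization are mine, not part of your code.

#include "mpi.h"
#include <numeric>
#include <vector>

using namespace std;

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    vector<int> vec(100);
    iota(vec.begin(), vec.end(), 0);   // 0, 1, ..., 99 so the result is easy to check

    // per-rank counts and displacements: the first (vec.size() % size) ranks get one extra element
    int base = vec.size() / size;
    int remainder = vec.size() % size;
    vector<int> counts(size), displs(size);
    for (int r = 0, off = 0; r < size; ++r) {
        counts[r] = base + (r < remainder ? 1 : 0);
        displs[r] = off;
        off += counts[r];
    }

    vector<int> split_vec(counts[rank]);
    MPI_Scatterv(&vec[0], &counts[0], &displs[0], MPI_INT,
                 &split_vec[0], counts[rank], MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
}// main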

Upvotes: 1
