Alexandre Krabbe

Reputation: 771

How to send an integer array via MPI_Send?

I'm trying to create a program in plain C that divides an integer array equally among any number of processes. For debugging purposes I'm using an integer array with 12 numbers and only 2 processes, so that the master process has [1,2,3,4,5,6] and slave 1 has [7,8,9,10,11,12]. However, I'm getting an error: MPI_ERR_BUFFER: invalid buffer pointer.

After some research I found out that there is a function that does exactly this (MPI_Scatter). Unfortunately, since I'm learning MPI, the implementation is restricted to MPI_Send and MPI_Recv only. Anyway, both MPI_Send and MPI_Recv take a void*, and I'm passing an int*, so it should work. Can anyone point out what I am doing wrong? Thank you.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int* create_sub_vec(int begin, int end, int* origin);
void print(int my_rank, int comm_sz, int n_over_p, int* sub_vec);

int main(void){

    int comm_sz;
    int my_rank;

    int vec[12] = {1,2,3,4,5,6,7,8,9,10,11,12};
    int* sub_vec = NULL;
    int n_over_p;

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);    

    n_over_p = 12/comm_sz;
    printf("Process %d computes n_over_p = %d\n", my_rank, n_over_p);

    if (my_rank != 0) {
        MPI_Recv(sub_vec, n_over_p, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        print(my_rank, comm_sz, n_over_p, sub_vec);

    } else {

        printf("Distributing data\n");
        for (int i = 1; i < comm_sz; i++) {
            sub_vec = create_sub_vec(i*n_over_p, (i*n_over_p)+n_over_p, vec);
            MPI_Send(sub_vec, n_over_p, MPI_INT, i, 0, MPI_COMM_WORLD);
        }
        printf("Finished distributing data\n");

        sub_vec = create_sub_vec(0, n_over_p, vec);

        print(my_rank, comm_sz, n_over_p, sub_vec);
    }

    MPI_Finalize();
    return 0;

}

int* create_sub_vec(int begin, int end, int* origin){
    int* sub_vec;
    int size;
    int aux = 0;
    size = end - begin;
    sub_vec = (int*)malloc(size * sizeof(int));
    for (int i = begin; i < end; ++i) {
        *(sub_vec+aux) = *(origin+i);
        aux += 1;
    }
    return  sub_vec;
}

void print(int my_rank, int comm_sz, int n_over_p, int* sub_vec){
    printf("Process %d out of %d received sub_vector: [ ", my_rank, comm_sz);
    for (int i = 0; i < n_over_p; ++i)
    {
        printf("%d, ", *(sub_vec+i));
    }
    printf("]\n");
}

Upvotes: 2

Views: 10126

Answers (1)

Gilles Gouaillardet

Reputation: 8395

The issue is that sub_vec is not allocated on the non-zero ranks. It is up to you to do that (MPI does not allocate the receive buffer for you).

The receive part should look like:

if (my_rank != 0) {
    sub_vec = (int *)malloc(n_over_p * sizeof(int));    
    MPI_Recv(sub_vec, n_over_p, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

As you wrote, the natural way to do this is via MPI_Scatter() (and once again, it is up to you to allocate the receive buffer before starting the scatter).

Upvotes: 4
