John Smith

Reputation: 97

OpenMPI doesn't send data from an array

I am trying to parallelize a grayscale filter for a BMP image; my function gets stuck when trying to send data from a pixel array.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "mpi.h"

#define MASTER_TO_SLAVE_TAG 1 //tag for messages sent from master to slaves
#define SLAVE_TO_MASTER_TAG 10 //tag for messages sent from slaves to master

#pragma pack(1)

struct fileHeader {
       //blablabla...
};

struct imageHeader {
       //blablabla...
};

typedef struct
{
        unsigned char R;
        unsigned char G;
        unsigned char B;
} pixel;

struct image {
       struct fileHeader fh;
       struct imageHeader ih;
       pixel *array;
};


void grayScale_Parallel(struct image *im, int size, int rank)
{
     int i,j,lum,aux,r;
     pixel tmp;

     int total_pixels = (*im).ih.width * (*im).ih.height;
     int qty = total_pixels/(size-1);
     int rest = total_pixels % (size-1);
     MPI_Status status;

     //printf("\n%d\n", rank);

     if(rank == 0)
     {
         for(i=1; i<size; i++){
             j = i*qty - qty;
             aux = j;

             if(rest != 0 && i==size-1) {qty=qty+rest;} //to distribute the whole workload
             printf("\nj: %d  qty: %d  rest: %d\n", j, qty, rest);

             //it gets stuck here; it doesn't send the data
             MPI_Send(&(*im).array[j], qty*3, MPI_BYTE, i, MASTER_TO_SLAVE_TAG, MPI_COMM_WORLD);
             MPI_Send(&aux, 1, MPI_INT, i, MASTER_TO_SLAVE_TAG+1, MPI_COMM_WORLD);
             MPI_Send(&qty, 1, MPI_INT, i, MASTER_TO_SLAVE_TAG+2, MPI_COMM_WORLD);

             printf("\nSending to node=%d, sender node=%d\n", i, rank);
         }

     }
     else
     {
         MPI_Recv(&aux, 1, MPI_INT, MPI_ANY_SOURCE, MASTER_TO_SLAVE_TAG+1, MPI_COMM_WORLD, &status);
         MPI_Recv(&qty, 1, MPI_INT, MPI_ANY_SOURCE, MASTER_TO_SLAVE_TAG+2, MPI_COMM_WORLD, &status);

         pixel *arreglo = (pixel *)calloc(qty, sizeof(pixel));
         MPI_Recv(&arreglo[0], qty*3, MPI_BYTE, MPI_ANY_SOURCE, MASTER_TO_SLAVE_TAG, MPI_COMM_WORLD, &status);
         //PROCESS RECEIVED PIXELS...
         //SEND to ROOT PROCESS

     }


    if (rank==0){
        //RECEIVE DATA FROM ALL PROCESS
    }
}


int main(int argc, char *argv[])
{    
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Status status;

    int op=1;
    char filename_toload[50];
    int bright_number=0;
    struct image image2;

    if (rank==0)
    {
        printf("File to load: \n");
        scanf("%s", filename_toload);
        loadImage(&image2, filename_toload);
    }

    while(op != 0)
    {
        if (rank==0)
        {
            printf("Welcome to example program!\n\n");
            printf("\t1.- GrayScale Parallel Function\n");
            printf("\t2.- Call another Function\n");
            printf("\t0.- Exit\n\t");

            printf("\n\n\tEnter option:");
            scanf("%d", &op);
        }

        //Broadcast the user's choice to all other ranks
        MPI_Bcast(&op, 1, MPI_INT, 0, MPI_COMM_WORLD);

        switch(op)
        {
            case 1:
                    grayScale_Parallel(&image2, size, rank);
                    MPI_Barrier(MPI_COMM_WORLD);
                    printf("GrayScale applied successfully!\n\n");
                    break;
            case 2:
                    function_blabla();
                    printf("Function called successfully\n\n");
                    break;
        }
    }

    MPI_Finalize();
    return 0;
}

I think the MPI_Send function can't read the array of pixels, but that is strange because I can print the pixels just fine.

Any idea?

Upvotes: 0

Views: 889

Answers (2)

xeroqu

Reputation: 435

To elaborate on Soravux's answer, you should change the order of your MPI_Send calls as follows (note how the MASTER_TO_SLAVE_TAG offsets are reassigned) to avoid deadlocks:

MPI_Send(&aux, 1, MPI_INT, i, MASTER_TO_SLAVE_TAG, MPI_COMM_WORLD);
MPI_Send(&qty, 1, MPI_INT, i, MASTER_TO_SLAVE_TAG+1, MPI_COMM_WORLD);
MPI_Send(&(*im).array[j], qty*3, MPI_BYTE, i, MASTER_TO_SLAVE_TAG+2, MPI_COMM_WORLD);

These calls need to be matched by the following sequence of MPI_Recv calls:

MPI_Recv(&aux, 1, MPI_INT, MPI_ANY_SOURCE, MASTER_TO_SLAVE_TAG, MPI_COMM_WORLD,&status);
MPI_Recv(&qty, 1, MPI_INT, MPI_ANY_SOURCE, MASTER_TO_SLAVE_TAG+1, MPI_COMM_WORLD,&status);

pixel *arreglo = (pixel *)calloc(qty, sizeof(pixel));
MPI_Recv(&arreglo[0], qty*3, MPI_BYTE, MPI_ANY_SOURCE, MASTER_TO_SLAVE_TAG+2, MPI_COMM_WORLD,&status);
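
One optional refinement, just a suggestion and not required for the fix: since rank 0 is the only process sending to the workers in this exchange, the worker can receive explicitly from rank 0 instead of MPI_ANY_SOURCE, which keeps the three messages of one exchange unambiguously paired:

/* receive only from the root (rank 0) rather than from any source */
MPI_Recv(&aux, 1, MPI_INT, 0, MASTER_TO_SLAVE_TAG, MPI_COMM_WORLD, &status);
MPI_Recv(&qty, 1, MPI_INT, 0, MASTER_TO_SLAVE_TAG+1, MPI_COMM_WORLD, &status);

pixel *arreglo = (pixel *)calloc(qty, sizeof(pixel));
MPI_Recv(&arreglo[0], qty*3, MPI_BYTE, 0, MASTER_TO_SLAVE_TAG+2, MPI_COMM_WORLD, &status);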

Hope this answers your question.

Upvotes: 1

Soravux

Reputation: 9983

The order in which you call MPI_Send and MPI_Recv is important. You must ensure the calls are issued in the same order on the sending and receiving sides, since these functions can block: MPI_Send may not return until the matching MPI_Recv (same source, tag, and communicator) has been posted on the destination, and for a large message such as your pixel block it typically will not. Mismatched ordering can therefore deadlock.
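
For illustration, here is a minimal, self-contained sketch of that rule (my own example, not code from the question: two ranks, an arbitrary 1 MiB payload, and made-up tag names TAG_OFFSET/TAG_COUNT/TAG_DATA). The small control messages go first, and the receives are posted in exactly the same order as the sends:

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

/* Hypothetical tag names, used only in this sketch */
#define TAG_OFFSET 1
#define TAG_COUNT  2
#define TAG_DATA   3

int main(int argc, char *argv[])
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Run with at least 2 ranks, e.g. mpirun -np 2 ./a.out */
    int count = 1 << 20; /* large enough that the send is unlikely to be buffered eagerly */

    if (rank == 0)
    {
        unsigned char *data = calloc(count, 1);
        int offset = 0;

        /* Small control messages first: they tell the worker how much to allocate */
        MPI_Send(&offset, 1, MPI_INT, 1, TAG_OFFSET, MPI_COMM_WORLD);
        MPI_Send(&count, 1, MPI_INT, 1, TAG_COUNT, MPI_COMM_WORLD);

        /* Large payload last: by now the worker can post the matching receive */
        MPI_Send(data, count, MPI_BYTE, 1, TAG_DATA, MPI_COMM_WORLD);
        free(data);
    }
    else if (rank == 1)
    {
        int offset, incoming;
        MPI_Status status;

        /* Receive in the same order the messages were sent */
        MPI_Recv(&offset, 1, MPI_INT, 0, TAG_OFFSET, MPI_COMM_WORLD, &status);
        MPI_Recv(&incoming, 1, MPI_INT, 0, TAG_COUNT, MPI_COMM_WORLD, &status);

        unsigned char *data = calloc(incoming, 1);
        MPI_Recv(data, incoming, MPI_BYTE, 0, TAG_DATA, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d bytes starting at offset %d\n", incoming, offset);
        free(data);
    }

    MPI_Finalize();
    return 0;
}

Had rank 0 sent the large block first while rank 1 posted that receive last, both processes could sit waiting on each other as soon as the payload exceeds the MPI library's internal buffering, which matches the hang described in the question.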

Upvotes: 0
