gotqn

Reputation: 43666

How to pass a two-dimensional array to all processes using broadcast?

I am having difficulties making the MPI_Bcast function work. I want to broadcast a two-dimensional array to all processes. It seems that the broadcast message is not working, because the first message the slave processes read is the one that makes them exit their functions - when DIETAG is used, the function exits.

Could anyone tell me the right way to pass a two-dimensional array to the slave processes using the broadcast function?

#include <stdio.h>
#include <iostream>
#include <string>
#include <time.h>
#include <mpi.h>

using namespace std;

/* Global constants */

    #define MASTER 0
    #define WORKTAG 1
    #define DIETAG 2
    #define MATRIX_SIZE 100

/* Local functions */

    static void master(void);
    static void slave(void);
    static void initialize_matrix(int (*matrix)[MATRIX_SIZE]);

/* Function executed when program is started */

    int main(int argc, char **argv)
    {
        // Declaring/initializing local variables
        int current_rank;

        // Initialize MPI 
        MPI_Init(&argc, &argv);

        // Finding out current process identity in the default communicator
        MPI_Comm_rank(MPI_COMM_WORLD, &current_rank);

        if (current_rank == MASTER) {
            master();
        } else {
            slave();
        }

        // Shut down MPI
        MPI_Finalize();

        return 0;
    }

/* Function executed by "master" process */

    static void master(void)
    {
        // Declaring variables
        int matrix_one[MATRIX_SIZE][MATRIX_SIZE];
        int processes_count, current_process_rank;

        // Initializing variables   
        initialize_matrix(matrix_one);
        MPI_Comm_size(MPI_COMM_WORLD, &processes_count);
        MPI_Comm_rank(MPI_COMM_WORLD, &current_process_rank);

        // this is currently not working
        MPI_Bcast(&matrix_one, MATRIX_SIZE * MATRIX_SIZE, MPI_INT, current_process_rank, MPI_COMM_WORLD);

        // Tell all the slaves to exit by sending an empty message with the DIETAG
        for (current_process_rank = 1; current_process_rank < processes_count; current_process_rank++) {
            MPI_Send(0, 0, MPI_INT, current_process_rank, DIETAG, MPI_COMM_WORLD);
        }

    }

/* Function executed by "slave" processes */

    static void slave(void) {

        MPI_Status status;
        int current_process_rank;
        int matrix_one[MATRIX_SIZE][MATRIX_SIZE];

        MPI_Comm_rank(MPI_COMM_WORLD, &current_process_rank);

        //received

        while(1) {

            MPI_Recv(&matrix_one, MATRIX_SIZE * MATRIX_SIZE, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

            // Check the tag of the received message
            if (status.MPI_TAG == DIETAG) {
                return;
            }

            cout << "value" << matrix_one[0][0];
        }
    }

/* Function for populating matrix with random numbers */

    static void initialize_matrix(int (*matrix)[MATRIX_SIZE])
    {
        int row, col;
        for (row = 0; row < MATRIX_SIZE; row++)
        {
            for (col = 0; col < MATRIX_SIZE; col++)
            {
                matrix[row][col] = rand();
            }
        }
    }

Upvotes: 2

Views: 2751

Answers (1)

Wesley Bland

Reputation: 9082

This is a common mistake for people when starting out in MPI. Collective operations should be called by all processes in the MPI communicator. They only match with other calls of the same type.

Don't think of an MPI_Bcast as a single process sending a bunch of messages to a bunch of other processes. Instead, think of it as a bunch of processes working together so that when the MPI_Bcast is over, all of the processes have the same data. This requires that they all call MPI_Bcast instead of one process broadcasting and all of the others calling MPI_Send.

There's a good tutorial on how to use simple collective functions here: http://mpitutorial.com/mpi-broadcast-and-collective-communication/
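As a minimal sketch of the pattern (the matrix size and root rank here mirror the question's code; the data values are just placeholders): every rank, master and slave alike, makes the same MPI_Bcast call and names the same root, instead of the slaves sitting in an MPI_Recv loop.

```cpp
#include <cstdio>
#include <mpi.h>

#define MASTER 0
#define MATRIX_SIZE 100

int main(int argc, char **argv)
{
    int rank;
    int matrix[MATRIX_SIZE][MATRIX_SIZE];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Only the root fills the matrix; the other ranks will receive it.
    if (rank == MASTER) {
        for (int row = 0; row < MATRIX_SIZE; row++)
            for (int col = 0; col < MATRIX_SIZE; col++)
                matrix[row][col] = row * MATRIX_SIZE + col;
    }

    // Every rank calls MPI_Bcast with the same root (MASTER). A 2D array
    // with static bounds is contiguous in memory, so the base address plus
    // the total element count describes the whole buffer.
    MPI_Bcast(&matrix[0][0], MATRIX_SIZE * MATRIX_SIZE, MPI_INT,
              MASTER, MPI_COMM_WORLD);

    // After the call returns, every rank holds the root's data.
    printf("rank %d sees matrix[0][0] = %d\n", rank, matrix[0][0]);

    MPI_Finalize();
    return 0;
}
```

Run it with e.g. `mpirun -np 4 ./a.out` and every rank prints the same value. This also explains the DIETAG symptom in the question: a broadcast only matches other MPI_Bcast calls, so the slaves' MPI_Recv loop never sees the broadcast data and the first point-to-point message they receive is the DIETAG one.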

Upvotes: 4
