Reputation: 41
I coded my MPI program in C. Assume I use 10 cores. I want to pass glob to all the other cores, but the MPI_Bcast in the following code is executed by only one of the cores, which is illegal and the code will deadlock.
for ( i = 0; i < n_loc; i++ ) {   /* n_loc is the local count on each core */
    if ( a_loc[i] == glob ) {     /* a_loc is a local array on each core */
        glob += 1;
        MPI_Bcast ( &glob, 1, MPI_INT, MYID, MPI_COMM_WORLD );
    }
}
So what can I do to solve this problem? The global variable is changed by one of the 10 cores, but I want to inform the other 9 cores.
Upvotes: 0
Views: 583
Reputation: 391
Remember that MPI collective communication always requires you to call the appropriate MPI function on all of your ranks. MPI_Bcast is the correct function to use in this circumstance, but you need to call it on every rank. Check the documentation at https://www.mpich.org/static/docs/v3.1/www3/MPI_Bcast.html and you will see that the "root" argument to MPI_Bcast determines which rank the value is copied from to all the others.
An example of correct MPI_Bcast usage:
#include <stdio.h>
#include <mpi.h>

int main(){
    MPI_Init(NULL, NULL);

    int comm_size;
    MPI_Comm_size(MPI_COMM_WORLD, &comm_size);

    int rank; // Get the number of this rank
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int my_data = rank; // Set my_data = rank -- will be different on all ranks

    for(int i = 0; i < comm_size; i++){
        if(rank == i){
            printf("Rank %i: Value %i\n", rank, my_data);
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }

    // Communicate the value of my_data from rank 0 to all other ranks
    int root_rank = 0;
    if(rank == root_rank){
        printf("Broadcasting from rank %i:\n", root_rank);
    }
    MPI_Bcast(&my_data, 1, MPI_INT, root_rank, MPI_COMM_WORLD);

    // my_data is now 0 on all ranks
    for(int i = 0; i < comm_size; i++){
        if(rank == i){
            printf("Rank %i: Value %i\n", rank, my_data);
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
Upvotes: 1