Reputation: 1208
The problem is gravitational interaction between N particles, computed by M processes. Each particle has 11 parameters. I want each process to calculate new positions for its own block of particles, and then broadcast the new data to all the other processes. Here is my code:
double * particles;
...
int startForProcess = numberOfParticlesPerThread * 11 * rank;
int endForProcess = startForProcess + numberOfParticlesPerThread * 11;
calculateNewPosition(particles, startForProcess, endForProcess, nParticles, rank);
MPI_Barrier(MPI_COMM_WORLD);
MPI_Bcast(particles+startForProcess, endForProcess - startForProcess, MPI_DOUBLE, rank, MPI_COMM_WORLD);
Unfortunately, each process only sees the changes it made itself; there is no communication between the processes.
Please tell me, what am I doing wrong?
Upvotes: 0
Views: 2699
Reputation: 4671
MPI_Bcast broadcasts data from the root rank to the rest of the ranks in the communicator. All processes in an MPI_Bcast call must pass the same root parameter, since they are all receiving data from the same process. In your code each process passes its own rank as the root, so every rank effectively broadcasts only to itself.
I think the function you are looking for is MPI_Allgather. It gathers each rank's contribution and propagates the combined, updated data to all processes.
The code could look something like this:
double * particles;
double * my_particles; /* must point to a buffer of at least numberOfParticlesPerThread * 11 doubles */
...
int startForProcess = numberOfParticlesPerThread * 11 * rank;
int endForProcess = startForProcess + numberOfParticlesPerThread * 11;
calculateNewPosition(particles, startForProcess, endForProcess, nParticles, rank);
/* note: pointer arithmetic on a double* already scales by the element
   size, so the offset must not be multiplied by sizeof(double) */
memcpy(my_particles, particles + startForProcess,
       (endForProcess - startForProcess) * sizeof(double));
MPI_Allgather(my_particles, endForProcess - startForProcess, MPI_DOUBLE,
particles, endForProcess - startForProcess, MPI_DOUBLE, MPI_COMM_WORLD);
As an aside, you don't need an MPI_Barrier before a collective call, unless of course you need the synchronization for some other reason.
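You can also skip the intermediate copy entirely: since each rank's block already sits at the right offset inside particles, passing MPI_IN_PLACE as the send buffer tells MPI_Allgather to take each rank's contribution from its own slot in the receive buffer (sendcount and sendtype are ignored in that case). A sketch of that variant, using the same variables as above:

```c
/* in-place variant: no my_particles buffer or memcpy needed */
MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
              particles, endForProcess - startForProcess, MPI_DOUBLE,
              MPI_COMM_WORLD);
```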
Upvotes: 4