Reputation: 3260
I'm trying to get an intuition for how MPI works, so I started with a little example:
...
MPI_Comm_rank(MPI_COMM_WORLD, &tid);
MPI_Comm_size(MPI_COMM_WORLD, &nthreads);
int message = 2;
if(tid != 0)
{
    MPI_Recv(&rec, 1, MPI_INT, tid-1, 0, MPI_COMM_WORLD, &status);
    printf("Process %i receive %i\n", tid, rec);
}
if(tid != nthreads-1)
{
    message++;
    printf("Process %i sends %i\n", tid, message);
    MPI_Send(&message, 1, MPI_INT, tid+1, 0, MPI_COMM_WORLD);
}
...
That works fine, except that message doesn't seem to increase above 3. Why is that?
Upvotes: 1
Views: 213
Reputation: 74485
I strongly suspect that what you want could be achieved by the following code:
int message = 2;
if(tid != 0)
{
    MPI_Recv(&message, 1, MPI_INT, tid-1, 0, MPI_COMM_WORLD, &status);
    printf("Process %i receive %i\n", tid, message);
}
if(tid != nthreads-1)
{
    message++;
    printf("Process %i sends %i\n", tid, message);
    MPI_Send(&message, 1, MPI_INT, tid+1, 0, MPI_COMM_WORLD);
}
Thus each process (except rank 0) would receive the value from the previous rank, increment it by one and then pass it to the next rank (except the last rank). For example:
rank 0 sends 3 to rank 1
rank 1 receives 3 from rank 0, increments it and sends 4 to rank 2
rank 2 receives 4 from rank 1, increments it and sends 5 to rank 3
...
rank nthreads-1 receives 2+nthreads-1 from rank nthreads-2
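For reference, a complete, compilable version of that snippet could look like the following minimal sketch (main, MPI_Init/MPI_Finalize and the status declaration are added here; everything else follows the code above):
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int tid, nthreads;
    int message = 2;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &tid);
    MPI_Comm_size(MPI_COMM_WORLD, &nthreads);

    if(tid != 0)
    {
        /* receive the running value from the previous rank */
        MPI_Recv(&message, 1, MPI_INT, tid-1, 0, MPI_COMM_WORLD, &status);
        printf("Process %i receive %i\n", tid, message);
    }
    if(tid != nthreads-1)
    {
        /* increment it and pass it on to the next rank */
        message++;
        printf("Process %i sends %i\n", tid, message);
        MPI_Send(&message, 1, MPI_INT, tid+1, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
With e.g. mpirun -np 4, the last rank (rank 3) would end up receiving 2+4-1 = 5.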
Upvotes: 1
Reputation: 1140
The mpirun -np N command starts N MPI processes, not one process with N threads. Each process has its own copy of the variable message, and each copy is set to 2. Rank 0 then increments its copy and sends message == 3 to rank 1. After that, rank 1 increments and sends its own message == 3, and rank 2 receives it...
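To make the "separate processes" point visible, here is a small sketch (the getpid() call and the rest of the scaffolding are additions of mine, not from the question) in which every rank prints its own process ID next to its private copy of message:
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>   /* getpid() */

int main(int argc, char **argv)
{
    int rank, nprocs;
    int message = 2;   /* every process gets its own copy, set to 2 */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* only this process's copy changes; the other ranks never see it */
    message += rank;

    printf("Rank %d of %d (pid %d) has message = %d\n",
           rank, nprocs, (int)getpid(), message);

    MPI_Finalize();
    return 0;
}
Each line shows a different pid, and changing message in one rank has no effect on the others.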
Upvotes: 2
Reputation: 50947
The MPI ranks aren't threads. In MPI speak they are called "ranks" (or sometimes "processing elements"), but generally speaking they are processes.
That would normally be a minor detail, but here I think it actually reflects the underlying confusion. If in OpenMP (say) you had the following code
#include <omp.h>   /* omp_get_thread_num(), omp_get_num_threads() */

int message = 0;
#pragma omp parallel default(none) shared(message)
{
    int tid = omp_get_thread_num();
    int nthreads = omp_get_num_threads();   /* needed inside the region with default(none) */
    if (tid != nthreads-1) {
        #pragma omp atomic
        message++;
    }
}
then at the end of the parallel section, message would in fact have been incremented OMP_NUM_THREADS-1 times.
But MPI isn't threading; when you launch your program with mpiexec or mpirun, n tasks are launched and each runs the exact same program in its own process, with no shared variables. In particular, each of the tasks has its own variable message, and that variable is incremented at most once in the above code snippet, meaning that message will always be 2 or 3 at the end of the posted code.
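As a minimal illustration (a sketch, using the same initial value of 2 as the posted code; MPI_Init/MPI_Finalize are added here), the MPI counterpart of the OpenMP snippet leaves every rank with its own message equal to either 2 or 3:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int tid, nthreads;
    int message = 2;   /* private to this process, nothing is shared */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &tid);
    MPI_Comm_size(MPI_COMM_WORLD, &nthreads);

    if (tid != nthreads-1)
        message++;     /* incremented at most once in this process */

    printf("Rank %d ends with message = %d\n", tid, message);

    MPI_Finalize();
    return 0;
}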
Upvotes: 2