Reputation: 1020
I have a code that calculates heat transfer in some number of conductors. What happens in one conductor doesn't affect the others in the model, so I'm trying to make the solution of these conductors run in parallel, with each processor taking on a different set of conductors. I thought the code would run on one core until it reached the loop where I put the command:

MPI_INIT

then run that section of code on however many cores I requested, and then go back to running on one core after the command:

MPI_FINALIZE

is encountered. But what I'm seeing is that the input file is read by both cores (if I use 2 cores), and all the outputs are printed twice as well. Does MPI not work the way I thought? If not, how can I achieve the behavior I want? I only want the code running on multiple cores for that one segment, not in any other subroutines or parts of the code outside of MPI_INIT and MPI_FINALIZE.
Upvotes: 1
Views: 3987
Reputation: 50927
This is a common misunderstanding, particularly among people who have experience with something like OpenMP, where threads are forked and joined at various points in the program.
In MPI, MPI_Init and MPI_Finalize initialize and finalize your MPI library; that's it. While the standard is purposefully silent on what happens before Init and after Finalize, as a practical matter it is your mpirun or mpiexec command that creates and launches the processes. If you type
command generally does the creating and launching of the processes. If you type
mpirun -np 4 hostname
for instance, four processes are launched, each of which runs the hostname command -- which is very definitely not an MPI executable and has no MPI_Init or MPI_Finalize calls in it. Each of those processes runs the executable from start to end, so you get four outputs. It is mpirun (or mpiexec) that launches the processes, not any MPI function call inside the program.
In your program, then, the entire program gets run by as many processes as you've requested.
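To see this concretely, here is a minimal C sketch (the prints are illustrative only); every statement, including those before MPI_Init and after MPI_Finalize, executes once per launched process:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    /* This already runs once per process: mpirun launched them all. */
    printf("before MPI_Init\n");

    MPI_Init(&argc, &argv);          /* sets up the MPI library, nothing more */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);

    MPI_Finalize();                  /* tears down the library; the processes live on */

    printf("after MPI_Finalize\n");  /* still printed once per process */
    return 0;
}

Run it with mpirun -np 2 and every line prints twice. That is also why your input file is read twice; the usual remedy is to guard I/O with a rank check such as if (rank == 0), or to have rank 0 read the input and broadcast it with MPI_Bcast.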
Upvotes: 9
Reputation: 4671
I don't think I understand the question entirely, but the first thing to take note of is that MPI can be initialized at most once. So repeatedly doing
MPI_Init
...
MPI_Finalize
is not allowed. Besides, MPI_Init and MPI_Finalize are expensive operations; you wouldn't want to call them in a loop.
MPI was originally designed around a static process model, meaning that a set of processes starts up, does the work, and exits. It sounds like you wish to change the number of processes at runtime. This is possible in MPI-2 (see MPI_Comm_spawn).
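For reference, a minimal sketch of that dynamic model, assuming a hypothetical executable name spawn_demo: the original process spawns three more copies of itself, and MPI_Comm_get_parent tells each process whether it was spawned.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Comm parent, children;
    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* Original process: spawn 3 more copies of this executable. */
        MPI_Comm_spawn("./spawn_demo", MPI_ARGV_NULL, 3, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);
        printf("parent: spawned 3 workers\n");
    } else {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("spawned worker %d\n", rank);
    }

    MPI_Finalize();
    return 0;
}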
On the other hand, continually starting up and shutting down processes will be slow. Just because a process has called MPI_Init doesn't mean it has to participate in all communication. Here is how I would approach the problem:
1. Call MPI_Init at the beginning of the program on all processes, even those that will only work locally.
2. Create a communicator containing only the processes that need to communicate, e.g. with MPI_Comm_create, and use it for all communication instead of MPI_COMM_WORLD (see the sketch below).
3. Call MPI_Finalize at the end of the program.
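A minimal sketch of those three steps, choosing (arbitrarily, for illustration) the even-ranked processes as the ones that communicate:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);               /* step 1: everyone initializes */

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* step 2: group the even ranks and build a communicator from them;
       MPI_Comm_create is collective, so every process calls it */
    MPI_Group world_group, worker_group;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    int nworkers = (world_size + 1) / 2;
    int *ranks = malloc(nworkers * sizeof *ranks);
    for (int i = 0; i < nworkers; i++)
        ranks[i] = 2 * i;
    MPI_Group_incl(world_group, nworkers, ranks, &worker_group);

    MPI_Comm worker_comm;
    MPI_Comm_create(MPI_COMM_WORLD, worker_group, &worker_comm);

    if (worker_comm != MPI_COMM_NULL) {
        /* only members got a real communicator; they talk among themselves */
        int me, sum;
        MPI_Comm_rank(worker_comm, &me);
        MPI_Allreduce(&me, &sum, 1, MPI_INT, MPI_SUM, worker_comm);
        printf("worker %d: sum of worker ranks = %d\n", me, sum);
        MPI_Comm_free(&worker_comm);
    }
    /* non-members fall through and do their local-only work here */

    free(ranks);
    MPI_Group_free(&worker_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();                       /* step 3: everyone finalizes */
    return 0;
}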
Upvotes: 3