Reputation: 583
I want to parallelize a numerical integration function. I want to call this function in the middle of a larger calculation, where the work before it is done only in the root process. Is it possible to do this with MPI?
double integral_count_MPI(double (*function)(double), double beginX, double endX, int count)
{
    double step, result;
    int i;

    if (endX - beginX <= 0) return 0;
    step = (endX - beginX) / count;
    result = 0;

    double *input = (double*)malloc((count + 1) * sizeof(double));
    for (i = 0; i <= count; i++)
    {
        input[i] = beginX + i*step;
    }
    // Calculate and gather
}
EDIT
The algorithm is:

1 process calculation;
while:
    1 process calculation;
    integrate a very complex function with many processes;
    1 process calculation;
end while;
1 process calculation;
Upvotes: 1
Views: 2152
Reputation: 74485
MPI provides various means to build libraries that use it "behind the scenes". For starters, you can initialise MPI on demand. MPI-2 modified the requirements for calling MPI_Init, so every compliant implementation should be able to initialise correctly with NULL arguments to MPI_Init (because the actual program arguments might not be available to the library). Since MPI should only be initialised once, the library must check whether that has already happened by calling MPI_Initialized. The code basically looks like this:
void library_init(void)
{
    int flag;
    MPI_Initialized(&flag);
    if (!flag)
    {
        MPI_Init(NULL, NULL);
        atexit(library_onexit);
    }
}
The initialisation code also registers an exit handler by calling atexit() from the C standard library. Within this exit handler it finalises MPI if that has not already been done. Failing to do so might result in mpiexec terminating the whole MPI job with a message that at least one process has exited without finalising MPI:
void library_onexit(void)
{
    int flag;
    MPI_Finalized(&flag);
    if (!flag)
        MPI_Finalize();
}
This arrangement allows you to write your integral_count_MPI function simply as:
double integral_count_MPI(...)
{
    library_init();
    ... MPI computations ...
}
integral_count_MPI will demand-initialise the MPI library on the first call. Later calls will not result in reinitialisation because of the way library_init is written. No explicit finalisation is necessary either; the exit handler takes care of it.
Note that you will still need to launch the code via an MPI process launcher (mpirun, mpiexec, etc.) and will have to be careful with doing I/O, since the serial part of the code executes in every instance. Many MPI-enabled libraries provide their own I/O routines for that purpose that filter on the process rank and allow only rank 0 to perform the actual I/O. You can also use the dynamic process management facilities of MPI to spawn additional processes on demand, but that would require abstracting a huge portion of the process management into the library that performs the integration, which would make it quite complex (and the code of your main program would look awkward).
Upvotes: 3
Reputation: 4117
You can find the MPI documentation here.
Basically, the logic is the following:
int main()
{
    int rank, n, i;

    MPI_Init(...);
    MPI_Comm_size(...); // get the number of processes n
    MPI_Comm_rank(...); // get my rank
    if (rank == 0) // master process
    {
        for (i = 1; i < n; i++)
            MPI_Send(...); // send interval data specific to process i
        double result = 0;
        for (i = 1; i < n; i++)
        {
            double part_result;
            MPI_Recv(&part_result, ...); // receive partial results from slaves
            result += part_result;
        }
        // Print result
    }
    else // slave process
    {
        MPI_Recv(...); // receive interval data from master (rank 0 process)
        double result = integral_count_MPI(...);
        MPI_Send(...); // send result to master
    }
    MPI_Finalize();
}
Upvotes: 2