Reputation: 4453
I have a parallel block, which spawns a certain number of threads. All of these threads should then start a "shared" for loop which contains multiple parallel for loops. For example something like this:
// 1. The parallel region spawns a number of threads.
#pragma omp parallel
{
    // 2. Each thread does something before it enters the loop below.
    doSomethingOnEachThreadAsPreparation();
    // 3. This loop should be run by all threads synchronously; conceptually
    // there is only one variable i, shared by all threads. When all
    // threads reach this loop, i starts at zero.
    for (int i = 0; i < 100; i++)
    {
        // 4. Then each thread calls this function (this happens in parallel).
        doSomethingOnEachThreadAtTheStartOfEachIteration();
        // 5. Then all threads work on this for loop in parallel.
        #pragma omp for
        for (int k = 0; k < 100000000; k++)
            doSomethingVeryTimeConsumingInParallel(k);
        // 6. After the parallel for loop there is (always) an implicit barrier.
        // 7. When all threads have finished the for loop, they call this
        // method in parallel.
        doSomethingOnEachThreadAfterEachIteration();
        // 8. Here should be another barrier. Once every thread has finished
        // the call above, they jump back to the top of the for loop,
        // where i is set to i + 1. If the loop condition holds,
        // continue at 4., otherwise go to 9.
    }
    // 9. When the "non-parallel" loop has finished, each thread continues.
    doSomethingMoreOnEachThread();
}
I thought it might already be possible to implement this type of behaviour using #pragma omp single and a shared i variable, but I am no longer certain of that.
What the functions actually do is irrelevant; this is about the control flow. I added comments as to how I want it to be.
If I understand it correctly, the loop at 3. would normally create a separate i variable for each thread, and the loop head is generally not executed by only a single thread. But the behaviour described above is what I want in this case.
Upvotes: 0
Views: 858
Reputation: 11537
You can run the for loop in all threads. Depending on your algorithm, a synchronization will probably be required either after every iteration (as below) or at the end of all iterations.
#pragma omp parallel
{
    // enter the parallel region
    doSomethingOnEachThreadAsPreparation();
    // done in parallel by all threads
    for (int i = 0; i < 100; i++)
    {
        doSomethingOnEachThreadAtTheStartOfEachIteration();
        #pragma omp for
        // parallelize the for loop
        for (int k = 0; k < 100000000; k++)
            doSomethingVeryTimeConsumingInParallel(k);
        // implicit barrier
        doSomethingOnEachThreadAfterEachIteration();
        #pragma omp barrier
        // A barrier may be required so that all iterations
        // stay synchronous, but if the algorithm does not
        // require it, performance will be better without it.
    }
    doSomethingMoreOnEachThread();
    // still in parallel
}
As pointed out by Zulan, enclosing the main for loop in an omp single in order to re-enter a parallel section later does not work, unless you use nested parallelism. In that case, threads would be recreated at every iteration and this would cause a major slowdown.
omp_set_nested(1);
#pragma omp parallel
{
    // enter the parallel region
    doSomethingOnEachThreadAsPreparation();
    // done in parallel by all threads
    #pragma omp single
    // only one thread runs the outer loop
    for (int i = 0; i < 100; i++)
    {
        #pragma omp parallel
        {
            // create a new nested parallel section;
            // new threads are created and this will
            // certainly degrade performance
            doSomethingOnEachThreadAtTheStartOfEachIteration();
            #pragma omp for
            // and we parallelize the for loop
            for (int k = 0; k < 100000000; k++)
                doSomethingVeryTimeConsumingInParallel(k);
            // implicit barrier
            doSomethingOnEachThreadAfterEachIteration();
        }
        // we leave the nested parallel section (implicit barrier)
    }
    // we leave the single section
    doSomethingMoreOnEachThread();
    // and we continue running in parallel
}
Upvotes: 2