Reputation: 15
I have a parallel block containing 2 parallel for-loops:
int i, j;
#pragma omp parallel
{
    #pragma omp for
    for(i=0; i < foo; i++)
        work();

    #pragma omp for private(j)
    for(i=0; i < foo; i++)
        for(j=0, j < foo; j++)
            work();
}
If I were to make it private like this:
int i, j;
#pragma omp parallel private(i)
{
    #pragma omp for
    for(i=0; i < foo; i++)
        work();

    #pragma omp for private(j)
    for(i=0; i < foo; i++)
        for(j=0, j < foo; j++)
            work();
}
Then I have NUM_THREADS copies of i. Will OpenMP still be able to schedule my threads based on i in the parallel for-loops, and how? If I don't make it private that way (see the first code example), then what behaviour can I expect from i between the two for-loops?
It is not a duplicate because I know you can usually let the parallel loop implicitly create a private version of i, but I am more concerned about whether that works as expected with a variable that has already been private before, or has even been worked on before while being temporarily private.
Upvotes: 1
Views: 1419
Reputation: 33659
To answer your question
what behaviour can I expect from i between the two for-loops?
In your first example, i is only private inside the work-sharing regions (the for loops). Between the two work-sharing regions it is still shared. In the second example, since you declared i private for the parallel region, it is private everywhere.
This is easy to show:
i = 1;
#pragma omp parallel
{
    #pragma omp for
    for(i=0; i<10; i++);
    i = 10;
}
printf("%d\n", i);
This prints 10 because i is shared except in the work-sharing region, and every thread writes 10 to the shared i.
However,
i = 1;
#pragma omp parallel private(i)
{
    #pragma omp for
    for(i=0; i<10; i++);
    i = 10;
}
printf("%d\n", i);
prints 1 because i is private in the entire parallel region and does not modify the i outside of the parallel region.
In your current code, since you only use i inside the work-sharing regions, it makes no difference. But if you used i outside a work-sharing region, between the loops, it could make a difference, and that could lead to a subtle bug. Since you only use i in the work-sharing regions, I would suggest either declaring i in the work-sharing region with for(int i=0; ... or declaring it private for the whole parallel region. The same goes for j.
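For reference, here is a minimal sketch of the first suggestion, with the loop variables declared inside the for statements so they are automatically private; FOO and the empty work() are placeholders standing in for your actual bound and work:
#define FOO 100                    /* placeholder for the real loop bound foo */

static void work(void) { /* placeholder for the real work */ }

int main(void)
{
    #pragma omp parallel
    {
        #pragma omp for            /* i is private because it is declared in the loop */
        for (int i = 0; i < FOO; i++)
            work();

        #pragma omp for            /* the same holds for i and j here */
        for (int i = 0; i < FOO; i++)
            for (int j = 0; j < FOO; j++)
                work();
    }
    return 0;
}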
Upvotes: 1
Reputation: 9781
Your second for-loop will not compile (the comma in for(j=0, j < foo; j++) should be a semicolon) unless you change your code to
#pragma omp parallel
{
    #pragma omp for
    for(i=0; i < foo; i++)
        work();

    #pragma omp for private(j)
    for(i=0; i < foo; i++)
        for(j=0; j < foo; j++)
            work();
}
and
int i, j;
#pragma omp parallel private(i)
{
    #pragma omp for
    for(i=0; i < foo; i++)
        work();

    #pragma omp for private(j)
    for(i=0; i < foo; i++)
        for(j=0; j < foo; j++)
            work();
}
It is now clear what the scope of the private i is. After this modification, all four for-loops will work as expected.
Will OpenMP still be able to schedule my threads based on i in parallel for loops and how?
Yes, at each #pragma omp for, the private copies of i will be initialized properly and used in the parallel for loop.
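As a small illustration (the iteration count of 8 and the printed message are just for demonstration), each thread's private i only ever takes the values of the iterations that the work-sharing construct assigns to that thread:
#include <stdio.h>
#include <omp.h>

int main(void)
{
    int i;
    #pragma omp parallel private(i)
    {
        #pragma omp for
        for (i = 0; i < 8; i++)
            printf("iteration %d run by thread %d\n", i, omp_get_thread_num());
    }
    return 0;
}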
If I don't make it private that way (see first code example) then what behavior can I expect from i between the two for-loops?
The loop index of a parallel for defined by #pragma omp for is always made private, even if the variable would otherwise be shared, so your first code example still works. Actually, it doesn't matter whether you declare i as private or not, according to this answer:
https://stackoverflow.com/a/37845938/1957265
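A minimal way to check this claim (the array size N = 1000 is arbitrary): fill an array using the shared i as the loop index. Because the work-sharing construct privatizes the loop variable, every element still ends up with the right value:
#include <stdio.h>

#define N 1000

int main(void)
{
    int a[N];
    int i;                    /* never declared private explicitly */

    #pragma omp parallel
    {
        #pragma omp for       /* the loop index i is implicitly privatized here */
        for (i = 0; i < N; i++)
            a[i] = i;
    }

    int errors = 0;
    for (int k = 0; k < N; k++)
        if (a[k] != k)
            errors++;
    printf("errors: %d\n", errors);   /* expected output: errors: 0 */
    return 0;
}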
Upvotes: 0