Reputation: 7886
Is there any reason not to do this? Or is the behavior well specified?
#pragma omp parallel for
for(auto x : stl_container)
{
...
}
It seems that the OpenMP specification only covers C++98, but I guess there might be further incompatibilities with C++11, e.g. due to C++11 threads, which are not used here. I wanted to be sure, still.
Upvotes: 91
Views: 26110
Reputation: 7886
OpenMP 5.0 adds the following line on page 99, which makes a lot of range-based for loops OK!
2.12.1.3 A range-based for loop with random access iterator has a canonical loop form.
Source : https://www.openmp.org/wp-content/uploads/OpenMP-API-Specification-5.0.pdf
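For example, with a compiler that implements this part of OpenMP 5.0, the loop from the question can be written directly over a std::vector (the container name and the per-element work below are just placeholders):

#include <vector>

int main()
{
    std::vector<double> stl_container(1000, 1.0);

    // std::vector iterators are random access, so under OpenMP 5.0
    // this range-based for loop has a canonical loop form.
    #pragma omp parallel for
    for (auto &x : stl_container)
    {
        x *= 2.0;   // placeholder per-element work
    }
}

With a sufficiently recent GCC this builds with g++ -fopenmp.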
Upvotes: 37
Reputation: 74435
The OpenMP 4.0 specification was finalised and published several days ago here. It still mandates that parallel loops should be in the canonical form (§2.6, p.51):
for (init-expr; test-expr; incr-expr)
    structured-block
The standard allows random-access iterators to be used in all of those expressions, so containers that provide such iterators can be traversed, e.g.:
#pragma omp parallel for
for (it = v.begin(); it < v.end(); it++)
{
...
}
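A complete version of that pattern might look like the following sketch (the vector and the per-element work are placeholders); declaring the iterator in the init expression keeps it private to each thread and matches the canonical form:

#include <vector>

int main()
{
    std::vector<double> v(1000, 1.0);

    // init-, test- and incr-expressions all operate on a random-access
    // iterator, so the loop matches the OpenMP canonical loop form.
    #pragma omp parallel for
    for (std::vector<double>::iterator it = v.begin(); it < v.end(); it++)
    {
        *it *= 2.0;   // placeholder per-element work
    }
}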
If you still insist on using the C++11 syntactic sugar, and if processing each element of stl_container takes a comparatively long time, then you could use the single-producer tasking pattern:
#pragma omp parallel
{
#pragma omp single
{
for (auto x : stl_container)
{
#pragma omp task
{
// Do something with x, e.g.
compute(x);
}
}
}
}
Tasking induces a certain overhead, so it would make no sense to use this pattern if compute(x) takes very little time to complete.
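For completeness, here is a self-contained sketch of the same pattern, assuming a hypothetical compute() doing some non-trivial per-element work and an added results vector and index to collect the output:

#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in for non-trivial per-element work.
static double compute(double x)
{
    return std::sqrt(x) * std::log(x + 1.0);
}

int main()
{
    std::vector<double> stl_container(1000, 1.0);
    std::vector<double> results(stl_container.size());

    #pragma omp parallel
    {
        // One thread walks the container and creates a task per element;
        // the other threads in the team pick up and execute the tasks.
        #pragma omp single
        {
            std::size_t i = 0;
            for (auto x : stl_container)
            {
                #pragma omp task firstprivate(x, i)
                {
                    results[i] = compute(x);
                }
                ++i;
            }
        }
    }
}

The firstprivate clause is spelled out for clarity only; variables that are private in the enclosing context, such as x and i here, are firstprivate in a task by default.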
Upvotes: 57