Reputation: 2110
I use a 2-dimensional vector. I have two operations:
Add item to vector with (i, j)
How can I parallelize this code? If I add only #pragma omp parallel for shared(tempVector, objVector), can OpenMP prevent the data race?
vector<myObject> objVector;
vector<vector<int> > tempVector(4);

for (int i = 0; i < objVector.size(); i++) {
    int x = objVector[i].X,
        y = objVector[i].Y;

    if (x <= Xmiddle + DIAMETER && y <= Ymiddle + DIAMETER) {
        tempVector[0].push_back(i);
    }
    if (x >= Xmiddle - DIAMETER && y <= Ymiddle + DIAMETER) {
        tempVector[1].push_back(i);
    }
    if (x <= Xmiddle + DIAMETER && y >= Ymiddle - DIAMETER) {
        tempVector[2].push_back(i);
    }
    if (x >= Xmiddle - DIAMETER && y >= Ymiddle - DIAMETER) {
        tempVector[3].push_back(i);
    }
}
Upvotes: 2
Views: 1371
Reputation: 1571
Unfortunately, OpenMP cannot prevent a data race in this case. The shared clause makes the vector variables visible to all threads, but it does nothing to order their accesses. std::vector's push_back is not thread-safe, because it may reallocate the vector's underlying storage when the vector has to grow.
This code can be parallelized, but how well it scales will depend on how much implementation effort you are willing to put in. To decide on an appropriate amount of effort, measure what fraction of your application's total run time this piece accounts for. Here are two (of the many possible) ways to parallelize your problem:
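For illustration, here is a minimal sketch of one such approach: each thread classifies its share of the indices into private buckets and the buckets are merged under a critical section at the end, so no push_back on a shared vector ever races. The classify helper and the MyObject struct are stand-ins for the question's types; X, Y, Xmiddle, Ymiddle and DIAMETER are taken from the question.
#include <omp.h>
#include <vector>
using std::vector;

struct MyObject { int X, Y; };   /* stand-in for the question's myObject */

/* Illustrative helper: parallel classification with per-thread buckets,
   merged into the shared result one thread at a time. */
vector<vector<int> > classify(const vector<MyObject>& objVector,
                              int Xmiddle, int Ymiddle, int DIAMETER)
{
    vector<vector<int> > tempVector(4);

    #pragma omp parallel
    {
        vector<vector<int> > local(4);   /* private to this thread: no race */

        #pragma omp for nowait
        for (int i = 0; i < (int)objVector.size(); i++) {
            int x = objVector[i].X,
                y = objVector[i].Y;
            if (x <= Xmiddle + DIAMETER && y <= Ymiddle + DIAMETER) local[0].push_back(i);
            if (x >= Xmiddle - DIAMETER && y <= Ymiddle + DIAMETER) local[1].push_back(i);
            if (x <= Xmiddle + DIAMETER && y >= Ymiddle - DIAMETER) local[2].push_back(i);
            if (x >= Xmiddle - DIAMETER && y >= Ymiddle - DIAMETER) local[3].push_back(i);
        }

        #pragma omp critical   /* merge: one thread appends at a time */
        for (int b = 0; b < 4; b++)
            tempVector[b].insert(tempVector[b].end(), local[b].begin(), local[b].end());
    }
    return tempVector;
}
Note that with this scheme the indices inside each bucket will generally appear in a different order than in the serial version, since each thread processes its own chunk of i.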
Upvotes: 0
Reputation: 366
You need to use the critical directive so that only one thread at a time accesses the shared variables:
#include <omp.h>

int main()
{
    int x = 0;

    #pragma omp parallel shared(x)
    {
        /* only one thread at a time may execute the increment */
        #pragma omp critical
        x = x + 1;
    }   /* end of parallel section */

    return 0;
}
Example taken from: https://computing.llnl.gov/tutorials/openMP/#CRITICAL
If I were you, I would think about a different approach (unfortunately you cannot use a reduction clause in this case, but you can definitely reshuffle the code to achieve the same result).
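Applied to the loop from the question (a minimal sketch, assuming objVector, tempVector, Xmiddle, Ymiddle and DIAMETER are in scope as shown there), the critical directive would look like this. It is race-free, but it serializes every push_back, so do not expect it to scale well:
#pragma omp parallel for shared(tempVector, objVector)
for (int i = 0; i < (int)objVector.size(); i++) {
    int x = objVector[i].X,
        y = objVector[i].Y;

    if (x <= Xmiddle + DIAMETER && y <= Ymiddle + DIAMETER) {
        #pragma omp critical   /* one thread at a time may grow the bucket */
        tempVector[0].push_back(i);
    }
    if (x >= Xmiddle - DIAMETER && y <= Ymiddle + DIAMETER) {
        #pragma omp critical
        tempVector[1].push_back(i);
    }
    if (x <= Xmiddle + DIAMETER && y >= Ymiddle - DIAMETER) {
        #pragma omp critical
        tempVector[2].push_back(i);
    }
    if (x >= Xmiddle - DIAMETER && y >= Ymiddle - DIAMETER) {
        #pragma omp critical
        tempVector[3].push_back(i);
    }
}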
Upvotes: 1