Martin

Reputation: 338

Sharing information via (C++) OpenMP between different threads

I am relatively new to parallel programming and want to accomplish the following task in C++ with OpenMP.

I have some (let's say 4) relatively complex objects/calculations to do. All of them are similar, but each is too complex to parallelize internally (so each runs serially). My idea was therefore to use a different thread/CPU for each of them, i.e. to spread the calculations over my cores. Though this might not be the most efficient use of parallelism in this context, it might be the easiest to achieve (because of the high complexity of each calculation).

While this would work by

#pragma omp parallel
{
    #pragma omp for
    for(int i = 0; i < 4; i++)
    {
       obj[i].calculate();    
    }
}

I want to exchange further information between these objects: for example, an integer (or a more complex object) "a" should be modified during each calculation (I cannot predict when or how often, but usually more than once). Whenever it is modified, the new value needs to be incorporated into all the other calculations as well. The specific exchange of the information is (again) relatively complex, and is handled (implicitly) by the "calculate" methods. In general this should look like the code above, with the additional integer "a", which is written to and read from by all the calculation methods:

int a;
#pragma omp parallel
{
   #pragma omp for
   for(int i = 0; i < 4; i++)
   {
       obj[i].calculate();    
   }
}

So my question is: how can I prevent a data race on "a"? That is, how can I create an object which may only be accessed by one thread at a time, without going into the details within the "calculate" methods themselves? Does OpenMP offer this functionality, and if not, which library does?
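To illustrate the kind of object I have in mind, here is a minimal sketch (the SharedValue wrapper and the demo function are purely hypothetical, using std::mutex rather than an OpenMP construct):

```cpp
#include <mutex>

// Hypothetical wrapper: every access goes through a mutex, so only
// one thread can read or modify the value at any given time.
class SharedValue {
public:
    int get() {
        std::lock_guard<std::mutex> lock(m_);
        return value_;
    }
    void add(int delta) {
        std::lock_guard<std::mutex> lock(m_);
        value_ += delta;
    }
private:
    std::mutex m_;
    int value_ = 0;
};

// Usage inside the parallel loop: each calculation can safely
// update the shared value without a data race.
int demo() {
    SharedValue a;
    #pragma omp parallel for
    for (int i = 0; i < 4; i++) {
        a.add(i + 1);  // the mutex serializes all concurrent updates
    }
    return a.get();  // every update was applied exactly once
}
```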

Best regards and thanks in advance!

Upvotes: 3

Views: 1073

Answers (2)

Tudor

Reputation: 62439

Of course, you realize that the method calculate has no access to the variable a in the code you posted. If you want to work like this, you can write your calculation code inline and use a critical section whenever you modify a:

int a;
#pragma omp parallel
{
   #pragma omp for
   for(int i = 0; i < 4; i++)
   {
       // code of calculate
       #pragma omp critical
       {
           // modify a
       }
       // other code
   }
}

Upvotes: 0

Bort

Reputation: 2491

Judging from your description, I am not sure whether parallel execution will help you at all if each thread has to wait for updated information about a.

Anyway, you can update variables without a race condition using the flush, atomic and critical directives. The best choice depends heavily on which threads have to update a, and which have to read the updated a.

critical: all threads execute the code, but only one at a time

atomic: protects a single memory update (such as a += 1) against concurrent writes; it may be implemented internally as a critical section, but is typically cheaper

flush: makes a thread's temporary view of shared variables consistent with memory; it is implied on entry to and exit from a critical section

Finally, barrier ensures that all threads have reached the same point in the code before any of them continues.
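A minimal sketch of how these directives combine (the run() helper is illustrative only, not code from the question):

```cpp
// Illustrative sketch: four loop iterations each bump a shared
// counter once, then all threads inspect the final value.
int run() {
    int a = 0;
    #pragma omp parallel
    {
        #pragma omp for
        for (int i = 0; i < 4; i++) {
            // atomic: protects this single update; cheaper than critical
            #pragma omp atomic
            a += 1;
        }
        // implicit barrier at the end of the for construct:
        // past this point every thread sees a == 4

        #pragma omp critical
        {
            // critical: arbitrary code, executed by one thread at a
            // time; entering and leaving also implies a flush of a
        }
    }
    return a;
}
```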

I want to exchange further information between this objects, for example an integer (or a more complex object) "a" should be modified during each calculation (though I can not forecast when and how often, but especially mostly more then once).

This statement is a bit confusing, because you should know when you need the updated a. At that point, all threads need to reach a barrier, a must be updated in a critical section, and execution then continues in parallel. So how many threads update a? A master thread, or all of them?

If only one thread has to update a, then another option is the single directive. Its code is executed by only one thread, with an implicit barrier and implicit flush after execution. These are the general options for properly propagating your complex object a to all threads. Good luck.
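A sketch of the single variant (the update_once() helper and the value 42 are illustrative assumptions):

```cpp
// Illustrative sketch: one thread publishes an update, and the
// implicit barrier and flush after single make it visible to all.
int update_once() {
    int a = 0;
    #pragma omp parallel
    {
        // ... first part of each calculation ...

        #pragma omp single
        {
            a = 42;  // executed by exactly one thread
        }
        // implicit barrier + flush: every thread now sees a == 42

        // ... continue calculating with the updated a ...
    }
    return a;
}
```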

Upvotes: 1
