Aquarius_Girl

Reputation: 22916

What is the point of running the same code under different threads (OpenMP)?

From: https://bisqwit.iki.fi/story/howto/openmp/

The parallel construct

The parallel construct starts a parallel block. It creates a team of N threads (where N is determined at runtime, usually from the
number of CPU cores, but may be affected by a few things), all of which execute the next statement (or the next block, if the statement is a {…}-enclosure). After the statement, the threads join back into one.

#pragma omp parallel  
   {  
     // Code inside this region runs in parallel.  
     printf("Hello!\n");  
   }

I want to understand what the point is of running the same code under different threads. In what kinds of cases can it be helpful?

Upvotes: 0

Views: 237

Answers (4)

alb_j

Reputation: 85

I want to understand what the point is of running the same code under different threads. In what kinds of cases can it be helpful?

One example: in physics you have a random process in your code (collisions, a Maxwellian initial distribution, etc.) and you need to run the code many times to get the average result; in this case you need to run the same code several times.
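A minimal sketch of that pattern, using a pi-by-sampling experiment as an illustrative stand-in for a physics simulation (the function names and the tiny LCG random generator are my own, not from the answer): every run executes the same code with a different seed, and the per-run results are averaged.

```c
/* Tiny LCG (Numerical Recipes constants); stands in for a real RNG. */
static double frand(unsigned *state) {
    *state = *state * 1664525u + 1013904223u;
    return (*state >> 8) / 16777216.0;   /* uniform in [0, 1) */
}

/* One run of the random experiment: estimate pi by sampling `n`
   points in the unit square and counting hits in the quarter circle. */
static double estimate_pi(unsigned seed, int n) {
    unsigned state = seed;
    int hits = 0;
    for (int i = 0; i < n; i++) {
        double x = frand(&state);
        double y = frand(&state);
        if (x * x + y * y <= 1.0)
            hits++;
    }
    return 4.0 * (double)hits / n;
}

/* Same code on every thread, a different seed per run; the
   reduction combines the per-thread partial totals at the end. */
double average_pi(int runs, int samples) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (int r = 0; r < runs; r++)
        total += estimate_pi(1234u + (unsigned)r, samples);
    return total / runs;
}
```

Each run is independent, so the threads never need to coordinate; only the final averaging brings the results together.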

Upvotes: 0

Stephen C

Reputation: 718787

I want to understand what the point is of running the same code under different threads. In what kinds of cases can it be helpful?

When you are running the same code on different data.

For example, if I want to invert 10 matrices, I might run the matrix inversion code on 10 threads ... to get (ideally) a 10-fold speedup compared to 1 thread and a for loop.
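A sketch of that idea; a 2x2 inversion stands in for the heavier matrix work in the answer, and the function names are mine:

```c
/* Invert a 2x2 matrix stored row-major as {a, b, c, d}, in place.
   Returns 0 on success, -1 if the matrix is singular. */
int invert2x2(double m[4]) {
    double det = m[0] * m[3] - m[1] * m[2];
    if (det == 0.0)
        return -1;
    double a = m[0];
    m[0] =  m[3] / det;  m[1] = -m[1] / det;
    m[2] = -m[2] / det;  m[3] =  a    / det;
    return 0;
}

/* Same code, different data: each thread inverts some of the
   matrices; no coordination is needed because the inputs are
   independent of one another. */
void invert_all(double (*mats)[4], int count) {
    #pragma omp parallel for
    for (int i = 0; i < count; i++)
        invert2x2(mats[i]);
}
```

With 10 matrices and 10 threads, each loop iteration lands on its own thread, which is exactly the "10-fold speedup (ideally)" scenario described above.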

Upvotes: 1

Emanuele Giona

Reputation: 781

By using omp_get_thread_num() you can retrieve the thread ID, which enables you to parametrize the so-called "same code" with respect to that thread ID.

Take this example:

A is a 1000-dimensional integer array and you need to sum its values using 2 OpenMP threads.

You would design your code something like this:

#include <omp.h>   // omp_get_thread_num()

int A_dim = 1000;
long sum[2] = {0, 0};
// A is the int[1000] array described above, filled elsewhere.
#pragma omp parallel num_threads(2)
   {
     int threadID = omp_get_thread_num();
     int start = threadID * (A_dim / 2);
     int end = (threadID + 1) * (A_dim / 2);
     for (int i = start; i < end; i++)
       sum[threadID] += A[i];
   }

start is the lower bound your thread will start summing from (example: thread #0 starts summing from 0, while thread #1 starts from 500).

end plays the same role as start, but as the upper bound of the array index the thread will sum up to (example: thread #0 sums values from A[0] to A[499], while thread #1 sums values from A[500] to A[999]). Afterwards, the two partial results are combined as sum[0] + sum[1].
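For comparison, the same sum can be written with OpenMP's work-sharing and reduction clauses, which compute each thread's bounds and combine the per-thread partial sums automatically. A minimal sketch (the function name sum_array is mine, not from the answer):

```c
/* Idiomatic equivalent of the manual start/end partitioning:
   `omp parallel for` splits the iteration range among the threads,
   and `reduction(+:sum)` gives every thread a private sum that is
   added together when the parallel region ends. */
long sum_array(const int *A, int A_dim) {
    long sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < A_dim; i++)
        sum += A[i];
    return sum;
}
```

This version also works for any thread count, whereas the hand-partitioned code above assumes exactly 2 threads.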

Upvotes: 1

Henkersmann

Reputation: 1220

The basic idea of OpenMP is to distribute work. For this you need to create some threads.

The parallel construct creates these threads. Afterwards you can distribute/share the work with other constructs like omp for or omp task.

A possible benefit of this separation is, e.g., when you have to allocate memory for each thread (i.e. thread-local data).
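A sketch of that separation, assuming a hypothetical per-item computation (the function names and buffer size are placeholders): the parallel region creates the threads and lets each one allocate its scratch buffer once, while the inner omp for shares the loop iterations among those same threads.

```c
#include <stdlib.h>

/* The `parallel` region runs once per thread, so each thread
   allocates its own scratch buffer exactly once; the `omp for`
   inside then distributes the loop iterations among those threads,
   each reusing its thread-local buffer. */
void process_items(const double *items, double *out, int n) {
    #pragma omp parallel
    {
        double *scratch = malloc(16 * sizeof *scratch); /* thread-local */
        #pragma omp for
        for (int i = 0; i < n; i++) {
            scratch[0] = items[i] * 2.0; /* stand-in computation */
            out[i] = scratch[0];
        }
        free(scratch);
    }
}
```

Had the allocation been placed inside the loop instead, it would run once per item rather than once per thread.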

Upvotes: 0
