Reputation: 103
I'm new here and a beginner-level programmer in C. I'm having a problem using OpenMP to speed up a for-loop. Below is a simple example:
#include <stdlib.h>
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>   /* declares gsl_ran_gamma_mt */
#include <omp.h>

gsl_rng *rng;

int main(void)
{
    int i, M = 100000000;
    double tmp;

    /* initialize RNG */
    gsl_rng_env_setup();
    rng = gsl_rng_alloc(gsl_rng_taus);
    gsl_rng_set(rng, (unsigned long int) 791526599);

    /* option 1: parallel */
    #pragma omp parallel for default(shared) private(i, tmp) schedule(dynamic)
    for (i = 0; i < M; i++) {
        tmp = gsl_ran_gamma_mt(rng, 4, 1./3);
    }

    /* option 2: sequential */
    for (i = 0; i < M; i++) {
        tmp = gsl_ran_gamma_mt(rng, 4, 1./3);
    }

    return 0;
}
The code draws from a gamma distribution for M iterations. It turns out the parallel approach with OpenMP (option 1) takes about 1 minute, while the sequential approach (option 2) takes only 20 seconds. While running with OpenMP, I can see the CPU usage is 800% (the server I'm using has 8 CPUs). The system is Linux with GCC 4.1.3, and the compile command I'm using is gcc -fopenmp -lgsl -lgslcblas -lm (I'm using GSL).
Am I doing something wrong? Please help me! Thanks!
P.S. As pointed out by some users, the problem might be caused by rng. But even if I replace
tmp = gsl_ran_gamma_mt(rng, 4, 1./3);
with, say,
tmp = 1000*10000;
the problem is still there...
Upvotes: 9
Views: 2574
Reputation: 103
Again, thanks everyone for helping. I just found out that if I get rid of
schedule(dynamic)
in the code, the problem disappears. But why is that?
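That is, the directive in option 1 becomes just:
#pragma omp parallel for default(shared) private(i, tmp)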
Upvotes: 1
Reputation: 4671
Your rng variable is shared, so the threads are spending all their time waiting to be able to use the random number generator. Give each thread a separate instance of the RNG. This will probably mean making the RNG initialization code run in parallel as well.
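A sketch along these lines, for example, gives every thread its own generator. (Seeding each generator with a simple offset of the thread number is illustrative only; it is not a statistically rigorous way to produce independent streams.)

#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <omp.h>

int main(void)
{
    int i, M = 100000000;
    double tmp;

    gsl_rng_env_setup();

    #pragma omp parallel private(tmp)
    {
        /* per-thread generator: no shared state, so nothing to contend on */
        gsl_rng *r = gsl_rng_alloc(gsl_rng_taus);

        /* simple per-thread seed offset -- illustrative, not rigorous */
        gsl_rng_set(r, 791526599UL + omp_get_thread_num());

        #pragma omp for
        for (i = 0; i < M; i++) {
            tmp = gsl_ran_gamma_mt(r, 4, 1./3);
        }

        gsl_rng_free(r);
    }
    return 0;
}

With each thread drawing from its own generator, there is nothing left to serialize on, and the loop should scale with the number of cores.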
Upvotes: 5
Reputation: 546015
gsl_ran_gamma_mt probably locks on rng to prevent concurrency issues (if it didn't, your parallel code would probably contain a race condition and thus yield wrong results). The solution, then, would be to have a separate rng instance for each thread, thus avoiding locking.
Upvotes: 12