steve

Reputation: 133

thread safe function pointer gsl monte carlo integration

I want to use the gsl monte carlo integration library and I'm currently running into thread safety problems which I could not resolve so far.

I have a static function gsl_func_wrapper that wraps the model evaluation and has the correct type required by gsl (a non-member function pointer). It is passed to the gsl monte carlo integrator inside the Integral member function.

class IntegralStrategyGSL2D: public IntegralStrategy2D {
  static Model2D *current_model;

  const gsl_rng_type *T;
  gsl_rng *r;
  gsl_monte_vegas_state *s;
  size_t calls;
  size_t maxcalls;

  static double gsl_func_wrapper(double *x, size_t dim, void *params) {
    return current_model->eval(x);
  }
...
};

IntegralStrategyGSL2D::IntegralStrategyGSL2D() :
    T(gsl_rng_default) {
  calls = 500;    // keep calls low at first
  maxcalls = 100000;
  gsl_rng_env_setup();
  r = gsl_rng_alloc(T);
  s = gsl_monte_vegas_alloc(2);
}

double IntegralStrategyGSL2D::Integral(Model2D *model2d, double xlow,
    double xhigh, double ylow, double yhigh, double precision) {
  current_model = model2d;
  double result, error;

  double xl[2] = { xlow, ylow };
  double xu[2] = { xhigh, yhigh };

  gsl_monte_function G = { &gsl_func_wrapper, 2, 0 };

  gsl_monte_vegas_init(s);

  while (true) {
    gsl_monte_vegas_integrate(&G, xl, xu, 2, calls, r, s, &result, &error);
    ...
  }
  ...
}

All of this works if I run with 1 thread, but I get a NaN integral once in a while if I use more than one thread. Since gsl requires this non-member function pointer, I don't really see right now how I can make this thread safe... Can someone point me in the right direction? Thanks
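(For context: gsl's gsl_monte_function carries a void *params slot, which offers a way to hand the model to the callback without any shared static at all. The sketch below illustrates the pattern with a made-up stand-in Model2D, not the asker's real class, and a plain function call in place of the actual gsl integrator.)

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for the question's model type (a placeholder, not the real class).
struct Model2D {
    double scale;
    double eval(const double *x) const { return scale * (x[0] + x[1]); }
};

// A GSL-style callback: the void *params slot carries the per-call model,
// so the wrapper needs no shared static and stays thread safe.
static double gsl_func_wrapper(double *x, size_t dim, void *params) {
    (void)dim;
    return static_cast<Model2D *>(params)->eval(x);
}

// With real GSL this would be set up as:
//   gsl_monte_function G = { &gsl_func_wrapper, 2, model2d };
// i.e. the model pointer goes into the third (params) field.
double call_once(Model2D *model) {
    double x[2] = { 1.0, 2.0 };
    return gsl_func_wrapper(x, 2, model);
}
```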

Edit: I added the eval function that will be called by the gsl_func_wrapper.

double GaussianModel2D::eval(const double *x) const {
  // see wikipedia definition

  double normalization = 1.0
        / (2.0 * M_PI * gauss_sigma_var1->getValue()
                * gauss_sigma_var2->getValue()
                * sqrt(1 - pow(gauss_rho->getValue(), 2.0)));

  double exp_value = exp(
        -(pow(x[0] - gauss_mean_var1->getValue(), 2.0)
                / pow(gauss_sigma_var1->getValue(), 2.0)
                + pow(x[1] - gauss_mean_var2->getValue(), 2.0)
                        / pow(gauss_sigma_var2->getValue(), 2.0)
                + 2.0 * gauss_rho->getValue()
                        * (x[0] - gauss_mean_var1->getValue())
                        * (x[1] - gauss_mean_var2->getValue())
                        / (gauss_sigma_var1->getValue()
                                * gauss_sigma_var2->getValue()))
                / (2.0 * (1 - pow(gauss_rho->getValue(), 2.0))));

  return gauss_amplitude->getValue() * normalization * exp_value;
}

Edit2: I added the calls to the Integral function as requested by Mike. Below are the two relevant blocks of code that make the Integral call. I don't really understand why the thread_local keyword for the current_model variable fixes the problem. Even though the code is a bit ugly, shouldn't the current_model variable always point to the same model2d pointer? So even if two threads have conflicting writes, the value that's written is always the same, so it should not matter, right?

double smearing_probability = divergence_model->Integral(int_range, 1e-4);

double Model2D::Integral(const std::vector<DataStructs::DimensionRange> &ranges,
    double precision) {
    return integral_strategy->Integral(this, ranges[0].range_low,
        ranges[0].range_high, ranges[1].range_low, ranges[1].range_high,
        precision);
}

Upvotes: 1

Views: 337

Answers (2)

steve

Reputation: 133

Ok, so I managed to figure out the problem. It was that the variables

const gsl_rng_type *T;
gsl_rng *r;
gsl_monte_vegas_state *s;

were all instance variables and hence shared by all threads. If I create each of them inside the Integral() function, separately for each call, everything runs fine...
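A rough sketch of what the restructured Integral could look like, with the rng and vegas workspace allocated per call so nothing is shared between threads (this is an illustration of the idea, not the asker's actual code; the iteration/convergence logic is elided as in the original):

```cpp
#include <gsl/gsl_monte_vegas.h>
#include <gsl/gsl_rng.h>

double IntegralStrategyGSL2D::Integral(Model2D *model2d, double xlow,
    double xhigh, double ylow, double yhigh, double precision) {
  double result, error;

  double xl[2] = { xlow, ylow };
  double xu[2] = { xhigh, yhigh };

  // Per-call state: each concurrent Integral call gets its own rng and
  // vegas workspace instead of sharing instance variables.
  gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
  gsl_monte_vegas_state *s = gsl_monte_vegas_alloc(2);

  gsl_monte_function G = { &gsl_func_wrapper, 2, 0 };

  gsl_monte_vegas_integrate(&G, xl, xu, 2, calls, r, s, &result, &error);
  // ... iterate until the desired precision is reached, as before ...

  gsl_monte_vegas_free(s);
  gsl_rng_free(r);
  return result;
}
```

Note that current_model is still a shared static here, so MikeMB's thread_local suggestion below the question still applies to that part.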

Upvotes: 0

MikeMB

Reputation: 21166

Assuming Integral is called in the same thread that executes gsl_func_wrapper, you can use thread_local instead of static as the storage class for current_model.

That will give you a separate current_model variable for each thread.
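A minimal, self-contained illustration of the effect (the Model2D struct and read_back helper here are made-up placeholders, not the question's types): each thread that writes the thread_local pointer sees only its own value, so concurrent callers cannot stomp on each other.

```cpp
#include <thread>

// Placeholder for the question's model type.
struct Model2D { int id; };

// thread_local: every thread gets its own independent copy of this pointer.
static thread_local Model2D *current_model = nullptr;

// Set the pointer and read it back in the same thread, mimicking
// Integral() setting current_model before the callback reads it.
int read_back(Model2D *m) {
    current_model = m;
    return current_model->id;  // unaffected by writes in other threads
}
```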

Upvotes: 0
