OddFunction

Reputation: 1

OpenMP Deadlock utilizing omp_set_lock

I have a function with the following general structure:

void a_generic_function(int N, int *arr, int *superarr)
{
  //some code
  for (int i = 0; i < N; i++)
  {
    omp_init_lock(&(lock[i])); //Initializes N locks; lock points to dynamically allocated memory.
  }
  #pragma omp parallel shared(lock)
  {
    #pragma omp for
    for (int i = 1; i <= N; i++)
    {
      //some code
      for (int j = arr[i-1]; j < arr[i]; j++) //where arr is size N+1
      {
        //some code
        for (int k = 0; k < temp; k++) //where temp < N
        {
          omp_set_lock(&(lock[subarr[k]])); //subarr is size <= N
          superarr[subarr[k]] += temp-1; //superarr is size N; temp is an int value.
          omp_unset_lock(&(lock[subarr[k]]));
        }
      }
    }
  }
}

There is only a single point in this code where thread entry is restricted, and the lock should be released immediately after the critical operation completes, yet this function often deadlocks. I cannot understand what would cause this.

(For completeness' sake: there is no parallelization outside this function.)

Upvotes: 0

Views: 308

Answers (1)

Ernir Erlingsson

Reputation: 2170

There are too many "//some code" placeholders to be sure, but it looks like you're not releasing the same locks you set, i.e. the value of subarr[k] changes after you call omp_set_lock and before you call omp_unset_lock. One possible solution is to store the index in a thread-private variable, e.g.

int n = subarr[k];
omp_set_lock(&(lock[n]));
// do stuff
omp_unset_lock(&(lock[n]));

This ensures that the thread is releasing the same lock it set.
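In context, the innermost loop from your question might then look roughly like this (just a sketch, assuming lock, subarr, temp and superarr are as described in the question):

for (int k = 0; k < temp; k++)
{
  int n = subarr[k];            //capture the index once, before taking the lock
  omp_set_lock(&(lock[n]));
  superarr[n] += temp-1;        //critical update uses the same captured index
  omp_unset_lock(&(lock[n]));   //releases exactly the lock that was set
}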

As a bonus: you can safely put #pragma omp parallel for on your lock init loop for improved performance.
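For reference, that initialization loop would then look something like this (same N and lock as in your question):

#pragma omp parallel for
for (int i = 0; i < N; i++)
{
  omp_init_lock(&(lock[i])); //each lock is initialized by exactly one thread
}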

Upvotes: 1
