Mayukh Sarkar

Reputation: 2625

How to synchronize threads without blocking?

As far as I know, a mutex is used for synchronizing all the threads that share the same data, following the principle that while one thread is using that common resource, all the other threads should be blocked until it is unlocked. Recently, in a blog post, I saw code explaining this concept, and some people commented that blocking all the threads while one thread is accessing the resources is a very bad idea and it goes against the concept of threading, which is true somehow. So my question is: how do I synchronize threads without blocking?

Here is the link of that blogpost

http://www.thegeekstuff.com/2012/05/c-mutex-examples/

Upvotes: 4

Views: 3056

Answers (5)

David Schwartz

Reputation: 182883

blocking all the threads while one thread is accessing the resources is a very bad idea and it goes against the concept of threading which is true somehow

This is a fallacy. Locks block only contending threads, allowing all non-contending threads to run concurrently. Running the work that's the most efficient to run at any particular time rather than forcing any particular ordering is not against the concept of threading at all.

Now if so many of your threads contend so badly that blocking contending threads is harming performance, there are two possibilities:

  1. Most likely you have a very poor design and you should fix it. Don't blame the locks for a high-contention design.

  2. You are in the rare case where other synchronization mechanisms are more appropriate (such as lock-free collections). But this requires significant expertise and analysis of the specific use case to find the best solution.

Generally, if your use case is a perfect fit for atomics, use them. Otherwise, mutexes (possibly in combination with condition variables) should be your first thought. That will cover 99% of the cases a typical multi-threaded C programmer will face.
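
For reference, here is a minimal sketch of that mutex-plus-condition-variable pattern (the trivial producer/consumer scenario and all names are mine, not from the question): one thread sleeps until a value has been published instead of spinning, and only the update of the shared state happens under the lock.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int value;
static int have_value = 0;   /* predicate guarded by the mutex */

void* consumer(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!have_value)                     /* loop: wakeups may be spurious */
        pthread_cond_wait(&ready, &lock);   /* atomically unlocks and sleeps */
    printf("got %d\n", value);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, consumer, NULL);

    pthread_mutex_lock(&lock);   /* critical section covers only the shared state */
    value = 42;
    have_value = 1;
    pthread_mutex_unlock(&lock);
    pthread_cond_signal(&ready); /* wake the waiting thread */

    pthread_join(t, NULL);
    return 0;
}

Only threads that actually contend for the lock ever block, and they block only for the few instructions inside the critical section.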

Upvotes: 1

Jim Wood

Reputation: 961

You cannot synchronize threads without blocking; that is the very definition of synchronization. However, good synchronization technique limits the scope of what is blocked to the absolute minimum. To illustrate, and to point out exactly why the article is wrong, consider the following:

From the article:

pthread_t tid[2];
int counter;
pthread_mutex_t lock;

void* doSomeThing(void *arg)
{
    pthread_mutex_lock(&lock);

    unsigned long i = 0;
    counter += 1;
    printf("\n Job %d started\n", counter);

    for(i=0; i<(0xFFFFFFFF);i++);

    printf("\n Job %d finished\n", counter);

    pthread_mutex_unlock(&lock);

    return NULL;
}

What it should be:

pthread_t tid[2];
int counter;
pthread_mutex_t lock;

void* doSomeThing(void *arg)
{
    unsigned long i = 0;

    /* Critical section: only the shared counter is touched under the lock. */
    pthread_mutex_lock(&lock);
    counter += 1;
    int myJobNumber = counter;
    pthread_mutex_unlock(&lock);

    /* The long-running work happens outside the lock, so threads can overlap. */
    printf("\n Job %d started\n", myJobNumber);

    for(i=0; i<(0xFFFFFFFF);i++);

    printf("\n Job %d finished\n", myJobNumber);

    return NULL;
}

Notice that in the article, the work being done (the pointless for loop) is done while holding the lock. This is complete nonsense, since the work is supposed to be done concurrently. The reason the lock is needed is only to protect the counter variable. Thus the threads only need to hold the lock when changing that variable as in the second example.

Mutex locks protect a critical section of code, an area that only one thread at a time should touch; all the other threads must block if they try to enter the critical section at the same time. However, if thread 1 is in the critical section and thread 2 is not, then it's perfectly fine for both to run concurrently.

Upvotes: 2

Isaiah van der Elst

Reputation: 1445

There are a number of tricks that can be used to avoid concurrency bottlenecks.

  1. Immutable data structures. The idea here is that concurrent reads are okay, but writes are not. To implement something like this, you basically treat your business units as factories that produce these immutable data structures, which are then consumed by other business units (a rough sketch follows this list).
  2. Asynchronous callbacks. This is the essence of event-driven development. If you have concurrent tasks, use the observer pattern to execute some logic when a resource becomes available: run code up until a shared resource is needed, then register a listener for when the resource becomes available. This typically results in less readable code and heavier strain on the stack, but you never block a thread waiting on a resource. If you have enough tasks ready to keep the CPUs running hot, this pattern will do it for you.
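
A rough sketch of the first idea, assuming C11 <stdatomic.h> is available (all names are mine, and safe reclamation of old snapshots is deliberately left out): readers grab a pointer to an immutable snapshot and never block, while a writer publishes a brand-new snapshot by swapping the pointer.

#include <stdatomic.h>
#include <stdlib.h>
#include <stdio.h>

/* An immutable snapshot: built once, never modified afterwards. */
struct config {
    int threshold;
    int verbose;
};

/* Readers only ever load this pointer; a writer swaps in a new snapshot. */
static _Atomic(struct config *) current_config;

void reader(void)
{
    struct config *c = atomic_load(&current_config);   /* no lock needed */
    printf("threshold=%d verbose=%d\n", c->threshold, c->verbose);
}

void writer(int new_threshold)
{
    struct config *fresh = malloc(sizeof *fresh);
    fresh->threshold = new_threshold;
    fresh->verbose   = 0;
    /* Publish the new immutable snapshot in one atomic step. Freeing the
       old one safely (readers may still hold it) needs a reclamation
       scheme such as RCU or hazard pointers, which is omitted here. */
    struct config *old = atomic_exchange(&current_config, fresh);
    (void)old;
}

int main(void)
{
    writer(10);   /* install an initial snapshot */
    reader();
    writer(20);   /* replace it wholesale; readers never see a half-written struct */
    reader();
    return 0;
}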

Even with these tools, you'll never completely remove the need for some synchronization (counters come to mind).

Upvotes: 0

Eugene

Reputation: 7268

The term you are looking for is lock-free data structures.

The general idea is that the state shared between threads is converted into one of those.

Implementations vary and are often compiler- or platform-specific. For example, MSVC has a set of _Interlocked* functions to perform simple atomic operations.
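
As a small illustration (my sketch, using the portable C11 <stdatomic.h> rather than the MSVC intrinsics mentioned above), a counter shared by several threads can be updated without any mutex:

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int counter;                 /* shared, but no mutex required */

void* work(void *arg)
{
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(&counter, 1);     /* one indivisible read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, work, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("%d\n", atomic_load(&counter)); /* always 400000 */
    return 0;
}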

Upvotes: 1

Joe

Reputation: 7818

You can use pthread_mutex_trylock() to attempt to take a lock. If that fails, you know you would have blocked; you can't do the work that needs the lock, but your thread is not blocked, so it can try to do something else. I think most of the comments on that blog are really about avoiding contention between threads, though, i.e. maximising multi-threaded performance is about avoiding threads working on the same resource at the same time. If you avoid that by design, then by design you don't need locks, because you never have contention.
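
A minimal sketch of that idea (the two helper functions are placeholders of mine, not part of the answer):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Purely illustrative stand-ins for real work. */
static void update_shared_state(void) { puts("updated the shared resource"); }
static void do_unrelated_work(void)   { puts("did something else instead"); }

void attempt_shared_work(void)
{
    if (pthread_mutex_trylock(&lock) == 0) {
        /* Got the lock without blocking: safe to touch the shared resource. */
        update_shared_state();
        pthread_mutex_unlock(&lock);
    } else {
        /* The lock is held elsewhere (EBUSY): don't wait, do other work. */
        do_unrelated_work();
    }
}

int main(void)
{
    attempt_shared_work();
    return 0;
}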

Upvotes: 0
