smatter

Reputation: 29178

Non-blocking algorithm and mutual exclusion

Wikipedia says that:

A non-blocking algorithm ensures that threads competing for a shared resource do not have their execution indefinitely postponed by mutual exclusion.

How can one ensure that this is achieved when multiple threads need to access some critical section?

Upvotes: 3

Views: 1312

Answers (2)

SplinterReality

Reputation: 3410

Please excuse the essay, but that tends to be my writing style.

Non-blocking algorithms have a variety of shapes, but the basic idea is to structure the algorithm such that you finish in a more-or-less deterministic number of steps, instead of being stopped indefinitely by a lock. If you have an operation that can potentially be time consuming, sometimes the best you can do is ensure your thread(s) have other work to move the overall operation of the program forward.

Not all algorithms are suitable for non-blocking methods, and many non-blocking methods have small critical sections in them, and so are not strictly speaking non-blocking, but behave so in practice. Use your head when implementing concurrency of any kind, as it's a bear to debug, whether you're an experienced programmer or not.

-= Non Blocking Algorithms =-

Atomic algorithms perform updates in a single step that cannot be interrupted. A simple example is incrementing a value with an atomic instruction (on x86, a LOCK-prefixed increment; in Java, AtomicInteger.incrementAndGet()), which completes as one indivisible operation that the CPU keeps consistent even across multiple processors. Note that a plain ++ or += is not enough on its own: it compiles to a read-modify-write sequence, and another thread can slip in between the read and the write. I'm not quite sure how far this extends into SIMD instructions where one CPU instruction touches multiple pieces of data.
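A minimal sketch of the difference, using the JDK's AtomicInteger (the counter class, thread count, and iteration count are arbitrary choices for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    // AtomicInteger performs its update as one hardware atomic operation,
    // so concurrent increments are never lost the way plain int++ can be.
    private final AtomicInteger count = new AtomicInteger(0);

    public int increment() {
        return count.incrementAndGet(); // atomic read-modify-write, no lock
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicCounter c = new AtomicCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        // Always 40000; with a plain shared int++ the total could come up short.
        System.out.println(c.count.get());
    }
}
```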

If you have an algorithm whose return value you don't need, you could potentially use a producer/consumer queue system, where only the queue is lock-based, or uses atomic updates to insert items. Either way, the queue update is quick and control returns to the caller while a consumer thread processes the backlog. Network and disk writes are typically of this variety.
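Here's one way that producer/consumer shape can look, using the JDK's LinkedBlockingQueue as the single coordination point (the log-writer framing is just an illustrative stand-in for a slow disk or network write):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class LogWriter {
    // Only the queue coordinates the threads: producers enqueue and return
    // immediately, while one consumer thread drains the backlog.
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public LogWriter() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String msg = queue.take(); // blocks only the consumer
                    System.out.println("wrote: " + msg); // stand-in for slow I/O
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    public void log(String msg) {
        queue.offer(msg); // fast; control returns to the caller at once
    }

    public static void main(String[] args) throws InterruptedException {
        LogWriter w = new LogWriter();
        w.log("hello");
        w.log("world");
        Thread.sleep(200); // give the daemon consumer time to drain
    }
}
```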

If you need a return value, a callback can notify your thread when the operation completes, or some new data is available for you to process. Optimally, you have other things to do instead of simply waiting for the callback.
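One way to sketch the callback pattern in Java is CompletableFuture, where the slow work runs on a pool thread and the caller stays free until the result arrives (the 6 * 7 computation is a placeholder for a genuinely lengthy operation):

```java
import java.util.concurrent.CompletableFuture;

public class CallbackDemo {
    public static void main(String[] args) {
        // supplyAsync runs the work on a pool thread; the caller continues
        // doing other things and only collects the result when it's ready.
        CompletableFuture<Integer> result =
                CompletableFuture.supplyAsync(() -> 6 * 7); // the "lengthy" operation

        System.out.println("doing other work while waiting...");
        System.out.println("result: " + result.join());
    }
}
```

A thenAccept(...) callback could replace the final join() if the caller never needs to wait at all.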

CAS (compare-and-swap) algorithms, where reads and updates succeed on a first-to-complete-wins basis, are another way of ensuring progress, but aborts take time, so completion time becomes non-deterministic if the algorithm must retry when aborts occur. Databases use a form of CAS for transaction completion called Optimistic Locking. This works best when contention is low, or in cases where contention would require notifying the user and collecting additional input.
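The classic CAS retry loop looks like this (the account/withdraw framing is just an example of an update with a precondition):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasRetry {
    private final AtomicInteger balance = new AtomicInteger(100);

    // Read, compute, attempt to swap; if another thread won the race,
    // compareAndSet fails and we retry against the fresh value.
    public boolean withdraw(int amount) {
        while (true) {
            int current = balance.get();
            if (current < amount) return false;   // precondition failed, give up
            if (balance.compareAndSet(current, current - amount)) {
                return true;                      // our update won
            }
            // else: someone else updated first -> loop and retry
        }
    }

    public static void main(String[] args) {
        CasRetry account = new CasRetry();
        System.out.println(account.withdraw(30));
        System.out.println(account.balance.get());
    }
}
```

Under heavy contention the loop may spin several times before succeeding, which is exactly the non-deterministic completion time mentioned above.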

-= Blocking Algorithms =-

Blocking algorithms block forward progress of all but one (or a small number) of the threads attempting to operate on a shared resource. If the operation is lengthy, this can cause significant delays, so the goal in ANY blocking algorithm should be to shrink the critical section where contention occurs to as tiny a fraction of the overall algorithm as possible. Database transactions that do this are using Pessimistic Locking.
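A small sketch of that "shrink the critical section" advice: do the expensive work outside the lock and hold it only for the shared update (the sample-summing task is an arbitrary stand-in for the expensive part):

```java
public class NarrowLock {
    private final Object lock = new Object();
    private long total = 0;

    public void record(int[] samples) {
        // Expensive part done lock-free, on private data...
        long sum = 0;
        for (int s : samples) sum += s;
        // ...lock held only for the tiny shared update.
        synchronized (lock) {
            total += sum;
        }
    }

    public long total() {
        synchronized (lock) {
            return total;
        }
    }

    public static void main(String[] args) {
        NarrowLock n = new NarrowLock();
        n.record(new int[] {1, 2, 3});
        System.out.println(n.total());
    }
}
```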

-= Deadlocks and Live-locks =-

Deadlocks are where two threads are each blocked by each other, usually because each is holding a lock the other wants.
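The standard cure for that lock-on-lock deadlock is a global lock ordering: every thread acquires the locks in the same sequence, so a cycle of waiters can never form. A minimal sketch (the two locks and the transfer() name are illustrative):

```java
public class LockOrdering {
    // Deadlock recipe: thread A takes LOCK_A then LOCK_B while thread B
    // takes LOCK_B then LOCK_A. Fixing a single acquisition order for
    // every thread makes the cycle impossible.
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    static void transfer() {
        synchronized (LOCK_A) {        // always A first...
            synchronized (LOCK_B) {    // ...then B, in every thread
                System.out.println("transferred");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrdering::transfer);
        Thread t2 = new Thread(LockOrdering::transfer);
        t1.start(); t2.start();
        t1.join(); t2.join(); // completes; reversing the order in one thread could hang
    }
}
```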

Live-locks can happen when two threads each want resources the other controls, or when two threads are entirely consumed processing the data they keep sending each other. In either case, the algorithm keeps trying to make forward progress without waiting, but the two (or potentially more) threads keep spinning without making real forward progress.

-= Importance =-

Why is any of this important? The major issue with concurrency of any kind is the possibility of an unpredictable state caused by two threads operating on the same data without regard for each other's changes. Blocking and non-blocking algorithms exist to prevent this, but if you get them wrong, you can end up with deadlocks, live-locks, or data corruption.

I hope this emphasizes that concurrency problems are present with BOTH blocking AND non-blocking algorithms, and that regardless of your chosen method of slaying the concurrency beast, you MUST pay close attention to how your program is structured to use, or provide shared resources.

Upvotes: 2

Jeremy

Reputation: 22415

Non-blocking algorithms take advantage of CAS: Java Concurrency: CAS vs Locking

Upvotes: 1
