overexchange

Reputation: 1

Is java concurrent package implemented using locks?

Conceptually,

Mutex

Readers/writer lock (a refinement of the mutex)

Semaphore

Condition Variable

are the four major synchronization mechanisms, which are purely lock based. Different programming languages have different terms/jargon for these four mechanisms. The POSIX pthread library is one example of such an implementation.

The first two are typically implemented using spin locks (busy-waiting).

The last two are typically implemented using sleep locks.

Lock-based synchronization is expensive in terms of CPU cycles.
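The spin-vs-sleep distinction above can be sketched in Java itself. The following is an illustrative toy, not how any real JVM lock is implemented: a busy-waiting lock built on an atomic flag, which burns CPU cycles while waiting.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin-lock sketch (illustrative only, not production code).
// lock() busy-waits, repeatedly retrying an atomic flip of the flag.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU spin hint; available since Java 9
        }
    }

    public boolean tryLock() {
        // Single attempt, never waits.
        return locked.compareAndSet(false, true);
    }

    public void unlock() {
        locked.set(false);
    }
}
```

A sleep lock differs in that a thread that fails to acquire the lock is parked by the scheduler (e.g. via `LockSupport.park`) instead of spinning, trading CPU cycles for context switches.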

But I have learnt that the java.util.concurrent package does not use lock-based (sleep/spin) mechanisms to implement synchronization.

My question:

What mechanism does the java concurrent package use to implement synchronization? Spin locks are CPU intensive, and sleep locks are costlier than spin locks due to frequent context switches.

Upvotes: 0

Views: 638

Answers (4)

rwong

Reputation: 6162

The OP's question and the comment exchanges appear to contain quite a bit of confusion. I will avoid answering the literal questions and instead try to give an overview.


Why has java.util.concurrent become today's recommended practice?

Because it encourages good application coding patterns. The potential performance gain (which may or may not materialize) is a bonus, but even if there is no performance gain, java.util.concurrent is still recommended because it helps people write correct code. Code that is fast but is flawed has no value.

How does java.util.concurrent encourage good coding patterns?

In many ways. I will just list a few.

(Disclaimer: I come from a C# background and do not have comprehensive knowledge of Java's concurrent package; though a lot of similarities exist between the Java and C# counterparts.)

Concurrent data collections simplify code.

  • Often, we use locking when we need to access and modify a data structure from different threads.
  • A typical operation involves:
    • Lock (blocked until succeed),
    • Read and write values,
    • Unlock.
  • Concurrent data collections simplify this by rolling all these operations into a single method call. The result is:
    • Simpler code on the caller's side,
    • Possibly better performance, because the library implementation can use a different (and more efficient) locking or lock-free mechanism than the JVM object monitor,
    • Avoidance of a common race-condition pitfall: time-of-check to time-of-use.
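To make this concrete, here is a small example using ConcurrentHashMap. The commented-out check-then-act version is the racy pattern described above; the merge call rolls the read-modify-write into one atomic operation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WordCount {
    public static void main(String[] args) {
        Map<String, Integer> counts = new ConcurrentHashMap<>();

        // Hand-rolled check-then-act -- racy without external locking:
        //   Integer old = counts.get("apple");
        //   counts.put("apple", old == null ? 1 : old + 1);

        // The concurrent collection makes read-modify-write one atomic call:
        counts.merge("apple", 1, Integer::sum);
        counts.merge("apple", 1, Integer::sum);

        System.out.println(counts.get("apple")); // 2
    }
}
```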

Two broad categories of concurrent data collection classes

There are two flavors of concurrent data collection classes. They are designed for very different application needs. To benefit from the "good coding patterns", you must know which one to use given each situation.

  • Non-blocking concurrent data collections
    • These classes guarantee a response (returning from a method call) in a deterministic amount of time, whether the operation succeeds or fails. They never deadlock or wait forever.
  • Blocking concurrent data collections
    • These classes make use of JVM and OS synchronization features to link together data operations with thread control.
    • As you have mentioned, they use sleep locks. If a blocking operation on a blocking concurrent data collection cannot be satisfied immediately, the requesting thread goes to sleep and is woken up when the operation is satisfied.

There is also a hybrid: blocking concurrent data collections that allow one to do a quick (non-blocking) check to see if the operation might succeed. This quick check can suffer from the "Time of check to time of use" race condition, but if used correctly it can be useful to some algorithms.
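The two flavors (and the hybrid) can be seen side by side in the queue classes. ConcurrentLinkedQueue is non-blocking: poll() always returns immediately, with null on an empty queue. LinkedBlockingQueue is blocking: take() sleeps until an element arrives, and the timed poll(timeout) is the hybrid quick-check variant.

```java
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueFlavors {
    public static void main(String[] args) throws InterruptedException {
        // Non-blocking: poll() never waits; returns null when empty.
        Queue<String> nonBlocking = new ConcurrentLinkedQueue<>();
        System.out.println(nonBlocking.poll()); // null

        // Blocking: take() would sleep until an element arrives.
        BlockingQueue<String> blocking = new LinkedBlockingQueue<>();
        blocking.put("job");
        System.out.println(blocking.take()); // "job"

        // Hybrid: wait briefly, then give up (returns null on timeout).
        System.out.println(blocking.poll(10, TimeUnit.MILLISECONDS)); // null
    }
}
```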

Before the java.util.concurrent package became available, programmers often had to code their own poor man's alternatives. Very often, these alternatives had hidden bugs.

Besides data collections?

Callable, Future, and Executor are very useful for concurrent processing. One could say that these patterns offer something remarkably different from the imperative programming paradigm.

Instead of specifying the exact order of execution of a number of tasks, the application can now describe the work and its dependencies:

  • Callable allows packaging "units of work" with the data that will be worked on,
  • Future provides a way for different units of work to express their order dependencies - which work unit must be completed ahead of another work unit, etc.
    • In other words, if two different Callable instances don't indicate any order dependencies, then they can potentially be executed simultaneously, if the machine is capable of parallel execution.
  • Executor specifies the policies (constraints) and strategies on how these units of work will be executed.
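A minimal example of the three pieces working together: two Callable work units with no ordering dependency are submitted to an Executor, and Future.get() expresses the one dependency that exists (the main thread needs both results).

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class UnitsOfWork {
    public static void main(String[] args) throws Exception {
        // The Executor encodes the execution policy (here: two worker threads).
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Two units of work with no ordering dependency between them;
        // they may run in parallel if the machine allows it.
        Callable<Integer> taskA = () -> 6 * 7;
        Callable<Integer> taskB = () -> "hello".length();

        Future<Integer> a = pool.submit(taskA);
        Future<Integer> b = pool.submit(taskB);

        // get() expresses the dependency: this thread needs both results.
        System.out.println(a.get() + b.get()); // 47

        pool.shutdown();
    }
}
```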

One big thing reportedly missing from the original java.util.concurrent is the ability to schedule a new Callable upon the successful completion of a Future once it has been submitted to an Executor. There have been proposals calling for a ListenableFuture.

(In C#, the similar unit-of-work composability is known as Task.WhenAll and Task.WhenAny. Together they make it possible to express many well-known multi-threading execution patterns without explicitly creating and destroying threads in one's own code.)

Upvotes: 0

apadana

Reputation: 14690

The short answer is no.

Unlike synchronized collections, concurrent collections largely avoid locks.

I had the exact same question myself and always wanted to understand the details. What ultimately helped me fully understand what's going on under the hood was reading the following chapters in Java Concurrency in Practice:

5.1 Synchronized collections
5.2 Concurrent collections

The idea is based on atomic operations, which by definition require no locks.
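The atomic operations in question are typically built on compare-and-swap (CAS). A sketch of the standard CAS retry loop, using AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Lock-free increment: retry the compare-and-swap until no other
    // thread has changed the value between our read and our write.
    public int increment() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
        return current + 1;
    }

    public int get() {
        return value.get();
    }
}
```

No thread ever blocks here; a thread that loses a race simply re-reads and retries.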

Upvotes: 0

llogiq

Reputation: 14541

That very much depends on which parts of the java.util.concurrent package you use (and to a lesser degree on the implementation). For example, LinkedBlockingQueue as of Java 1.7 uses both ReentrantLocks and Conditions, while the java.util.concurrent.atomic classes and the CopyOnWrite* classes rely on volatiles plus native methods (which insert the appropriate memory barriers).

The actual native implementation of Locks, Semaphores, etc. also varies between architectures and implementations.

Edit: If you really care about performance, you should measure the performance of your specific workload. There are folks far cleverer than me on the JVM team, like A. Shipilev (whose site is a trove of information on this topic), who do this and care deeply about JVM performance.

Upvotes: 3

Jason C

Reputation: 40356

This question is best answered by looking at the source code for java.util.concurrent. The precise implementation depends on the class you are referring to.

For example, many of the implementations make use of volatile data and sun.misc.Unsafe, which defers operations such as compare-and-swap to native code. Semaphore (via AbstractQueuedSynchronizer) makes heavy use of this.
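As an illustration of the idea (a deliberately simplified sketch, not the actual AbstractQueuedSynchronizer implementation, which also parks and queues waiting threads), the permit accounting of a Semaphore-like tryAcquire can be built on compare-and-swap alone:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of Semaphore-style permit accounting on top of
// compare-and-swap. The real Semaphore delegates to
// AbstractQueuedSynchronizer, which additionally queues and parks
// threads that must wait; this shows only the lock-free fast path.
class CasPermits {
    private final AtomicInteger permits;

    CasPermits(int initial) {
        permits = new AtomicInteger(initial);
    }

    boolean tryAcquire() {
        for (;;) {
            int available = permits.get();
            if (available == 0) {
                return false; // no permit left; caller does not block
            }
            if (permits.compareAndSet(available, available - 1)) {
                return true;  // we won the race for a permit
            }
            // CAS failed: another thread raced us; re-read and retry.
        }
    }

    void release() {
        permits.incrementAndGet();
    }
}
```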

You can browse through the other objects there (use the navigation pane on the left of that site) to take a look at the other synchronization objects and how they are implemented.

Upvotes: 2
