Reputation: 1
Conceptually, there are four major synchronization mechanisms, all purely lock based:
Mutex
Readers/writer lock (a refined form of mutex)
Semaphore
Condition variable
Different programming languages have different terms/jargon for these four mechanisms. The POSIX pthreads library is one example of such an implementation.
The first two are typically implemented using spin locks (busy-waiting).
The last two are typically implemented using sleep locks.
Lock-based synchronization is expensive in terms of CPU cycles.
But I have learnt that the java.util.concurrent packages do not use lock-based (sleep/spin) mechanisms to implement synchronization.
My question:
What mechanism does the java.util.concurrent package use to implement synchronization? Spin locks are CPU intensive, and sleep locks are costlier still because of the frequent context switches.
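To make the terminology concrete, here is a minimal sketch of my own (not JDK code) of a busy-waiting spin lock built on compare-and-swap; a sleep lock would instead deschedule the waiting thread (for example via LockSupport.park), trading CPU time for a context switch.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal illustration only: a busy-waiting spin lock built on CAS.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Burns CPU cycles while waiting (busy-wait).
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU; available since Java 9
        }
    }

    void unlock() {
        locked.set(false);
    }
}
```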
Upvotes: 0
Views: 638
Reputation: 6162
The OP's question and the comment exchanges appear to contain quite a bit of confusion. I will avoid answering the literal questions and instead try to give an overview.
Why has java.util.concurrent become today's recommended practice?
Because it encourages good application coding patterns. The potential performance gain (which may or may not materialize) is a bonus, but even if there is no performance gain, java.util.concurrent is still recommended because it helps people write correct code. Code that is fast but flawed has no value.
How does java.util.concurrent encourage good coding patterns?
In many ways. I will just list a few.
(Disclaimer: I come from a C# background and do not have comprehensive knowledge of Java's concurrent package; though a lot of similarities exist between the Java and C# counterparts.)
Concurrent data collections simplify code.
Two broad categories of concurrent data collection classes
There are two flavors of concurrent data collection classes, designed for very different application needs: non-blocking collections, whose operations either complete immediately or report failure immediately, and blocking collections, whose operations wait until they can proceed. To benefit from the "good coding patterns", you must know which one to use in each situation.
There is also a hybrid: blocking concurrent data collections that additionally allow a quick (non-blocking) check of whether an operation might succeed. This quick check can suffer from the "time of check to time of use" race condition, but used correctly it is useful in some algorithms.
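As a hedged illustration of that distinction (my example, not from the package docs), ArrayBlockingQueue offers both styles on the same collection: offer returns immediately, put blocks, and remainingCapacity gives the quick check that is subject to the time-of-check-to-time-of-use caveat.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueStyles {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(4);

        // Non-blocking style: returns immediately and reports success/failure.
        boolean accepted = queue.offer("job-1");
        System.out.println("offer succeeded: " + accepted);

        // Blocking style: waits until space is available.
        queue.put("job-2");

        // Hybrid-style quick check: remainingCapacity() may say there is room,
        // but another thread could fill the queue before we act on it (TOCTOU).
        if (queue.remainingCapacity() > 0) {
            queue.offer("job-3");
        }
    }
}
```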
Before the java.util.concurrent package became available, programmers often had to code their own poor-man's alternatives. Very often, these poor alternatives had hidden bugs.
Besides data collections?
Callable, Future, and Executor are very useful for concurrent processing. One could say that these patterns offer something remarkably different from the imperative programming paradigm.
Instead of specifying the exact order of execution of a number of tasks, the application can now rely on the following (a short sketch follows after this list):
Callable allows packaging "units of work" together with the data that will be worked on.
Future provides a way for different units of work to express their order dependencies - which work unit must be completed ahead of another work unit, and so on.
If Callable instances don't indicate any order dependencies, they can potentially be executed simultaneously, provided the machine is capable of parallel execution.
Executor specifies the policies (constraints) and strategies on how these units of work will be executed.
One big thing which was reportedly missing from the original java.util.concurrent is the ability to schedule a new Callable upon the successful completion of a Future when it is submitted to an Executor. There are proposals calling for a ListenableFuture.
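As a rough sketch of the pattern described above (my example; the class name WorkUnits is made up), Callables package the work, the Executor runs them, and Future.get expresses the point at which results are needed:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WorkUnits {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // Callable packages a unit of work together with its data.
        Callable<Integer> sumA = () -> 1 + 2;
        Callable<Integer> sumB = () -> 3 + 4;

        // The Executor decides how and when the units run; Futures represent results.
        Future<Integer> futureA = executor.submit(sumA);
        Future<Integer> futureB = executor.submit(sumB);

        // Future.get() expresses an ordering dependency: this line waits for both.
        int total = futureA.get() + futureB.get();
        System.out.println("total = " + total);

        executor.shutdown();
    }
}
```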
(In C#, the similar unit-of-work composability is known as Task.WhenAll and Task.WhenAny. Together they make it possible to express many well-known multi-threading execution patterns without explicitly creating and destroying threads in one's own code.)
Upvotes: 0
Reputation: 14690
The short answer is no.
Unlike synchronized collections, concurrent collections are not implemented with locks.
I had exactly the same question myself and always wanted to understand the details. What ultimately helped me fully understand what's going on under the hood was reading the following chapters of Java Concurrency in Practice:
5.1 Synchronized collections
5.2 Concurrent collections
The idea is based on atomic operations, which by their nature require no lock.
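A minimal sketch of that idea (my example, not from the book): AtomicInteger exposes compare-and-swap, so an update can be retried in a loop without ever taking a lock.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger();

    // Classic CAS retry loop: read, compute, attempt to swap, retry on conflict.
    public int incrementAndDouble() {
        while (true) {
            int current = value.get();
            int next = (current + 1) * 2;
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // Another thread changed the value first; loop and try again.
        }
    }
}
```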
Upvotes: 0
Reputation: 14541
That very much depends on what parts of the java.util.concurrent package you use (and to a lesser degree on the implementation). For example, LinkedBlockingQueue as of Java 1.7 uses both ReentrantLocks and Conditions, while the java.util.concurrent.atomic classes and the CopyOnWrite* classes rely on volatiles plus native methods (which insert the appropriate memory barriers).
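For illustration only, here is a heavily simplified sketch of the ReentrantLock-plus-Condition pattern such classes build on; it is nowhere near the real LinkedBlockingQueue implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Simplified illustration of the lock + condition pattern; not the JDK code.
class TinyBlockingQueue<E> {
    private final Deque<E> items = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    void put(E e) {
        lock.lock();
        try {
            items.addLast(e);
            notEmpty.signal();          // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    E take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();       // releases the lock while waiting
            }
            return items.removeFirst();
        } finally {
            lock.unlock();
        }
    }
}
```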
The actual native implementation of Locks, Semaphores, etc. also varies between architectures and implementations.
Edit: If you really care about performance, you should measure the performance of your specific workload. There are folks on the JVM team far more clever than me, such as A. Shipilev (whose site is a trove of information on this topic), who do this and care deeply about JVM performance.
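If it helps, such a measurement would typically be written with JMH (the micro-benchmarking harness Shipilev maintains); the benchmark below is a made-up sketch comparing an atomic counter with a synchronized one, not a recommendation of what to measure.

```java
import java.util.concurrent.atomic.AtomicLong;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Threads;

// Hypothetical benchmark: lock-free increment vs. synchronized increment.
@State(Scope.Benchmark)
public class CounterBench {
    private final AtomicLong atomic = new AtomicLong();
    private long plain;

    @Benchmark
    @Threads(4)
    public long atomicIncrement() {
        return atomic.incrementAndGet();
    }

    @Benchmark
    @Threads(4)
    public synchronized long lockedIncrement() {
        return ++plain;
    }
}
```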
Upvotes: 3
Reputation: 40356
This question is best answered by looking at the source code for java.util.concurrent. The precise implementation depends on the class you are referring to.
For example, many of the implementations make use of volatile data and sun.misc.Unsafe, which defers e.g. compare-and-swap to native operations. Semaphore (via AbstractQueuedSynchronizer) makes heavy use of this.
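To give a feel for how AbstractQueuedSynchronizer gets used, here is the kind of minimal synchronizer its documentation describes (a one-shot latch); Semaphore's internal Sync class follows the same shape, managing its permit count through the inherited, atomically updated state field.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal one-shot latch built on AQS, similar in spirit to how
// Semaphore and CountDownLatch delegate their state management.
class BooleanLatch {
    private static class Sync extends AbstractQueuedSynchronizer {
        boolean isSignalled() { return getState() != 0; }

        @Override
        protected int tryAcquireShared(int ignore) {
            return isSignalled() ? 1 : -1;   // succeed only once signalled
        }

        @Override
        protected boolean tryReleaseShared(int ignore) {
            setState(1);                      // flip the shared AQS state field
            return true;
        }
    }

    private final Sync sync = new Sync();

    public boolean isSignalled() { return sync.isSignalled(); }
    public void signal()         { sync.releaseShared(1); }
    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);   // parks until signalled
    }
}
```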
You can browse through the other objects there (use the navigation pane on the left of that site) to take a look at the other synchronization objects and how they are implemented.
Upvotes: 2