Reputation: 37034
Quote from the book Java Concurrency in Practice:
The performance cost of synchronization comes from several sources. The visibility guarantees provided by synchronized and volatile may entail using special instructions called memory barriers that can flush or invalidate caches, flush hardware write buffers, and stall execution pipelines. Memory barriers may also have indirect performance consequences because they inhibit other compiler optimizations; most operations cannot be reordered with memory barriers. When assessing the performance impact of synchronization, it is important to distinguish between contended and uncontended synchronization. The synchronized mechanism is optimized for the uncontended case (volatile is always uncontended), and at this writing, the performance cost of a "fast-path" uncontended synchronization ranges from 20 to 250 clock cycles for most systems.
Can you clarify this? What if I have a huge number of threads which read a volatile variable?
Can you provide a definition of contention?
Is there a tool to measure contention? In which units is it measured?
Upvotes: 2
Views: 455
Reputation: 10529
Can you clarify this?
That is one dense paragraph that touches a lot of topics. Which topics, specifically, are you asking for clarification on? Your question is too broad to answer satisfactorily. Sorry.
Now, if your question is specific to uncontended synchronization, it means that threads within a JVM do not have to block, get unblocked/notified, and then go back to a blocked state.
Under the hood, the JVM uses hardware-specific memory barriers that ensure a volatile field is always read from and written to main memory, not the CPU/core cache, and there is no contention. When you use a synchronized block, on the other hand, all your threads are in a blocked state except one: the one reading whatever data is being protected by the synchronized block.
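Here is a minimal sketch of that difference (the class and field names are hypothetical, chosen only for illustration):

    public class VisibilitySketch {

        // Uncontended by definition: a volatile read never blocks, and the
        // memory barriers emitted around it guarantee that a reader sees
        // the latest write from any other thread.
        private volatile boolean running = true;

        public boolean isRunning() {
            return running;   // plain volatile read, no lock involved
        }

        public void stop() {
            running = false;  // volatile write, immediately visible to readers
        }

        // Mutual exclusion: only one thread at a time may execute these
        // methods; all the others are parked in a blocked state on the monitor.
        private int counter = 0;

        public synchronized void increment() {
            counter++;        // compound read-modify-write needs the lock;
                              // volatile alone would not make this atomic
        }

        public synchronized int get() {
            return counter;
        }
    }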
Let's call that thread, the one accessing the synchronized data, thread A.
Now, here is the kicker: when thread A is done with the data and exits the synchronized block, the JVM wakes up all the other threads that were waiting for thread A to exit the synchronized block.
They all wake up (and that is expensive, CPU- and memory-wise), and they all race trying to get hold of the synchronized block.
Imagine a whole bunch of people trying to exit a crowded room through a tiny door. Yep, like that; that's how threads act when they try to grab a synchronization lock.
But only one gets it and gets in. All the others go back to sleep, kind of, in what is called a blocked state. This is also expensive, resource-wise.
So every time one of the threads exits a synchronized block, all the other threads go crazy (best mental image I can think of) to get access to it, one gets it, and all the others go back to a blocked state.
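To make that cost visible, here is a crude, hypothetical timing sketch (the thread and iteration counts are arbitrary, and a proper measurement would use a benchmark harness such as JMH). With THREADS = 1 the lock stays on the uncontended fast path; with 16 threads, much of the time goes to blocking, waking up, and losing the race:

    import java.util.concurrent.CountDownLatch;

    public class ContentionDemo {
        private static final int THREADS = 16;        // try 1 vs. 16
        private static final int ITERATIONS = 1_000_000;
        private static final Object lock = new Object();
        private static long counter = 0;

        public static void main(String[] args) throws InterruptedException {
            CountDownLatch done = new CountDownLatch(THREADS);
            long start = System.nanoTime();
            for (int i = 0; i < THREADS; i++) {
                new Thread(() -> {
                    for (int j = 0; j < ITERATIONS; j++) {
                        synchronized (lock) {          // contended acquisition
                            counter++;
                        }
                    }
                    done.countDown();
                }).start();
            }
            done.await();                              // happens-before: safe to read counter
            System.out.printf("counter=%d, took %d ms%n",
                    counter, (System.nanoTime() - start) / 1_000_000);
        }
    }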
That's what makes synchronized blocks expensive. Now, here is the caveat: it used to be very expensive, pre-JDK 1.4. That was 17 years ago. Java 1.4 started seeing some improvements (2003, IIRC).
Then Java 1.5 introduced even greater improvements in 2005, 12 years ago, which made synchronized blocks less expensive.
It is important to keep such things in mind. There is a lot of outdated information out there.
What if I have a huge number of threads which read a volatile variable?
It wouldn't matter that much in terms of correctness. A volatile field will always show a consistent value regardless of the number of threads.
Now, if you have a very large number of threads, performance can suffer because of context switches, memory utilization, etc. (and not necessarily or primarily because of accessing a volatile field).
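A hypothetical sketch to illustrate: one writer and a hundred readers of a volatile field. Every reader stays correct no matter how many there are; any slowdown at this scale comes from scheduling that many threads, not from the volatile reads themselves:

    public class ManyReadersSketch {
        private static volatile int value = 0;

        public static void main(String[] args) {
            // A single writer bumps the field a few times.
            new Thread(() -> {
                for (int i = 1; i <= 5; i++) {
                    value = i;                         // volatile write
                    try { Thread.sleep(100); } catch (InterruptedException e) { return; }
                }
            }).start();

            // Many readers poll it with plain volatile reads, no locking.
            for (int r = 0; r < 100; r++) {
                new Thread(() -> {
                    int last = 0;
                    while (last < 5) {
                        last = value;                  // never blocks; may skip
                    }                                  // intermediate values, but
                }).start();                            // never reads a torn value
            }
        }
    }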
Can you provide a definition of contention?
Please don't take it the wrong way, but if you are asking that question, I'm afraid you are not fully prepared to use a book like the one you are reading.
You will need a more basic introduction to concurrency, and contention specifically.
https://en.wikipedia.org/wiki/Resource_contention
Best regards.
Upvotes: 4