Reputation: 317
Assume I have a field that's accessed concurrently and it's read many times and seldom written to.
public Object myRef = new Object();
Let's say a thread T1 will be setting myRef to another value once a minute, while N other threads will be reading myRef billions of times, continuously and concurrently. I only need that the new value of myRef is eventually visible to all threads.
A simple solution would be to use an AtomicReference or simply volatile like this:
public volatile Object myRef = new Object();
However, AFAIK volatile reads do incur a performance cost. I know it's minuscule; this is more something I wonder about than something I actually need. So let's not be concerned with performance and assume this is a purely theoretical question.
So the question boils down to: is there a way to safely bypass volatile reads for references that are only seldom written to, by doing something at the write site?
After some reading, it looks like memory barriers could be what I need. So if a construct like this existed, my problem would be solved:
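Something like this, say, where emitWriteBarrier() is a made-up method standing in for the barrier (pseudocode, not a real API):

```
public Object myRef = new Object();

// Writer side only:
public void update(Object newRef) {
    myRef = new Object();  // plain, non-volatile write
    emitWriteBarrier();    // imagined: "publish" the write to all threads,
                           // with no extra cost on the readers' plain reads
}
```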
Is there such a construct in Java, or in general? At this point I can't help but think that if something like this existed, it would already have been incorporated into the atomic packages by the much smarter people maintaining them. (Perhaps a disproportionately frequent read-to-write ratio wasn't a case worth catering for?) So maybe there is something wrong in my thinking, and such a construct is not possible at all?
I have seen some code samples use volatile for a similar purpose, exploiting its happens-before contract. There is a separate sync field, e.g.:
public Object myRef = new Object();
public volatile int sync = 0;
and at writing thread/site:
myRef = new Object();
sync += 1; // volatile write to emulate a barrier
I am not sure this works, and some argue it works only on the x86 architecture. After reading the related sections of the JMM, I think it's only guaranteed to work if that volatile write is paired with a volatile read of sync by the threads that need to see the new value of myRef. (So it doesn't get rid of the volatile read.)
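To illustrate what I mean, here is a self-contained sketch of that pattern (class and method names are my own) with the reader side spelled out; note the reader still has to perform a volatile read of sync to get the happens-before edge:

```java
public class SyncPiggyback {
    public Object myRef = new Object();
    public volatile int sync = 0;

    // Writer: plain write, then volatile write to "publish" it.
    public void update(Object newRef) {
        myRef = newRef;
        sync += 1; // volatile write; happens-before anchor
    }

    // Reader: the volatile read of sync is what establishes the
    // happens-before edge. Without it, the plain read of myRef may
    // legally return a stale value forever.
    public Object read() {
        int ignored = sync; // volatile read -- cannot be omitted
        return myRef;
    }
}
```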
Returning to my original question: is this possible at all? Is it possible in Java? Is it possible with one of the new APIs in Java 9, VarHandles?
Upvotes: 8
Views: 277
Reputation: 11307
X86 provides TSO; you get [LoadLoad][LoadStore][StoreStore] fences for free.
A volatile read requires acquire semantics:
r1=Y
[LoadLoad]
[LoadStore]
...
And as you can see, this is already provided by the X86 for free.
In your case most of the accesses are reads, and the cache line will already be in the local cache.
There is a price to pay on compiler level optimizations, but on a hardware level, a volatile read is just as expensive as a regular read.
On the other hand, a volatile write is more expensive because it requires a [StoreLoad] to guarantee sequential consistency (on the JVM this is done using a lock addl $0, (%rsp) or an MFENCE). Since writes are very seldom in your situation, this isn't an issue.
I would be careful with optimizations on this level because it is very easy to make the code more complex than is actually needed. Best to guide your development efforts by some benchmarks e.g. using JMH and preferably test it on real hardware. Also there could be other nasty creatures hidden like false sharing.
Upvotes: 0
Reputation: 7620
You can use ReentrantReadWriteLock, which is designed for the few-writes-many-reads scenario. Or you can use StampedLock, which is designed for the same case of few writes and many reads, but where reads can also be attempted optimistically. Example:
private StampedLock lock = new StampedLock();

public void modify() {            // write method
    long stamp = lock.writeLock();
    try {
        modifyStateHere();
    } finally {
        lock.unlockWrite(stamp);
    }
}

public Object read() {            // read method
    long stamp = lock.tryOptimisticRead();
    Object result = doRead();     // try without lock; method should be fast
    if (!lock.validate(stamp)) {  // optimistic read failed
        stamp = lock.readLock();  // acquire read lock and repeat read
        try {
            result = doRead();
        } finally {
            lock.unlockRead(stamp);
        }
    }
    return result;
}
Alternatively, make your state immutable and allow controlled modifications only by cloning the existing object and altering only the necessary properties via the constructor. Once the new state is constructed, you assign it to the reference being read by the many reading threads. This way reading threads incur zero locking cost.
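A minimal sketch of that copy-on-write idea (class and field names are my own; I keep the shared reference volatile for safe publication of the new snapshot):

```java
public class CopyOnWriteState {
    // Immutable snapshot: all fields final, set only in the constructor.
    public static final class State {
        final int count;
        final String label;
        State(int count, String label) {
            this.count = count;
            this.label = label;
        }
        // "Modification" = clone with one property changed.
        State withCount(int newCount) {
            return new State(newCount, label);
        }
    }

    // Volatile so a newly constructed snapshot is safely published.
    private volatile State state = new State(0, "init");

    // Readers just load the reference; no locking involved.
    public State read() {
        return state;
    }

    // Single writer: build the new snapshot, then swap the reference.
    public void increment() {
        state = state.withCount(state.count + 1);
    }
}
```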
Upvotes: 0
Reputation: 718698
So basically you want the semantics of volatile without the runtime cost.

I don't think it is possible.

The problem is that the runtime cost of volatile is due to the instructions that implement the memory barriers in the writer and the reader code. If you "optimize" the reader by getting rid of its memory barrier, then you are no longer guaranteed that the reader will see the "seldom written" new value when it is actually written.
FWIW, some versions of the sun.misc.Unsafe class provide explicit loadFence, storeFence and fullFence methods, but I don't think that using them will give any performance benefit over using a volatile.
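(For reference, Java 9 exposes public equivalents of those fences as static methods on java.lang.invoke.VarHandle. A sketch, with my own class and method names; note the reader still needs its own fence to be guaranteed to observe the write, which is exactly the cost the question hoped to avoid:)

```java
import java.lang.invoke.VarHandle;

public class FenceSketch {
    static Object myRef = new Object();

    // Writer side: plain write followed by an explicit fence (Java 9+).
    static void update(Object newRef) {
        myRef = newRef;
        VarHandle.fullFence(); // public counterpart of Unsafe.fullFence()
    }

    // Reader side: without an acquire fence (or a volatile read), the
    // plain read of myRef has no ordering guarantee at all.
    static Object read() {
        VarHandle.acquireFence(); // public counterpart of Unsafe.loadFence()
        return myRef;
    }
}
```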
Hypothetically ...
what you want is for one processor in a multi-processor system to be able to tell all of the other processors:
"Hey! Whatever you are doing, invalidate your memory cache for address XYZ, and do it now."
Unfortunately, modern ISAs don't support this.
In practice, each processor controls its own cache.
Upvotes: 2
Reputation: 739
Not quite sure if this is correct, but I might solve this using a queue.
Create a class that wraps an ArrayBlockingQueue attribute. The class has an update method and a read method. The update method posts the new value onto the queue and removes all values except the last value. The read method returns the result of a peek operation on the queue, i.e. read but do not remove. Threads peeking the element at the front of the queue do so unimpeded. Threads updating the queue do so cleanly.
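A sketch of that wrapper (the class name QueueRef, the capacity, and the synchronized update are my own choices; with a single writer the synchronization on update is just belt-and-braces):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class QueueRef<T> {
    private final ArrayBlockingQueue<T> queue = new ArrayBlockingQueue<>(16);

    public QueueRef(T initial) {
        queue.add(initial);
    }

    // Post the new value, then drop every older value in front of it,
    // so the next peek() sees the latest value.
    public synchronized void update(T newValue) {
        queue.add(newValue);
        while (queue.size() > 1) {
            queue.poll(); // discard all but the most recent value
        }
    }

    // Read without removing; readers peek unimpeded.
    public T read() {
        return queue.peek();
    }
}
```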
Upvotes: 0