Reputation: 15351
After reading this question and this one (especially the second answer), I am massively confused about volatile and its semantics with respect to memory barriers.
In the above examples, we write to a volatile variable, which causes an mfence, which in turn flushes all pending store buffers/load buffers to main cache, invalidating other cache lines.
However, the non-volatile fields could be optimized and stored in registers, for example? So how can we be sure that, given a write to a volatile variable, ALL state changes prior to it will be visible? What if we change 1000 things?
Upvotes: 2
Views: 1229
Reputation: 116918
In the above examples, we write to a volatile variable, which causes an mfence, which in turn flushes all pending store buffers/load buffers to main cache...
This is correct.
invalidating other cache lines.
This is not correct, or at least it is misleading. It is not the write memory barrier that invalidates the other cache lines; it is the read memory barrier crossed by the other processors that invalidates each processor's cached lines. Memory synchronization is a cooperative action between the thread writing to and the threads reading from volatile variables.
The Java memory model actually only guarantees that the updated value will be seen by a thread that reads the same volatile variable that was written. In reality, though, all dirty cache lines are flushed when a write memory barrier is crossed, and cached lines are invalidated when a read memory barrier is crossed, regardless of which variable is being accessed.
However, the non-volatile fields could be optimized and stored in registers, for example? So how can we be sure that, given a write to a volatile variable, ALL state changes prior to it will be visible? What if we change 1000 things?
According to this documentation (and others), memory barriers also cause the compiler to generate code that flushes registers. To quote:
... while with barrier() the compiler must discard the value of all memory locations that it has currently cached in any machine registers.
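As a sketch of this cooperative write/read pairing (the class and field names below are invented for the example), a writer thread publishes a non-volatile payload by storing to a volatile flag, and a reader acquires it by loading that same flag:

```java
// Illustrative sketch: the volatile store acts as the write barrier,
// and the volatile load in the other thread as the matching read barrier.
public class VisibilityExample {
    private int data;                  // non-volatile payload
    private volatile boolean ready;    // volatile flag

    // Writer: everything written before the volatile store of `ready`
    // (including `data`) becomes visible to threads that later read `ready`.
    void writer() {
        data = 42;
        ready = true;
    }

    // Reader: after observing ready == true via the volatile load,
    // this thread is guaranteed to see data == 42.
    int reader() {
        while (!ready) {
            Thread.onSpinWait();  // busy-wait until the flag becomes visible
        }
        return data;
    }

    public static void main(String[] args) throws InterruptedException {
        VisibilityExample ex = new VisibilityExample();
        Thread t = new Thread(() -> System.out.println(ex.reader()));
        t.start();
        ex.writer();
        t.join();  // prints 42
    }
}
```

Note that neither side is sufficient alone: the writer's barrier publishes the values, and the reader's barrier discards stale cached state.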
Upvotes: 1
Reputation: 2641
Volatile variables share the visibility features of synchronized, but none of the atomicity features. This means that threads will automatically see the most up-to-date value for volatile variables.
You can use volatile variables instead of locks only under a restricted set of circumstances. Both of the following criteria must be met for volatile variables to provide the desired thread-safety:
Writes to the variable do not depend on its current value.
The variable does not participate in invariants with other variables.
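A sketch of a case that satisfies both criteria (the class name is invented for the example): a simple shutdown flag, where the write is "blind" (it does not read the current value) and the flag takes part in no invariant with other state:

```java
// Hypothetical stop-flag worker: volatile is sufficient here, no lock needed.
public class Worker implements Runnable {
    private volatile boolean stopped = false;

    // Criterion 1: a blind write; it does not depend on the current value.
    public void stop() {
        stopped = true;
    }

    @Override
    public void run() {
        // Criterion 2: `stopped` participates in no invariant with other
        // variables, so reading it alone is enough to decide.
        while (!stopped) {
            Thread.onSpinWait();  // stand-in for a unit of real work
        }
    }
}
```

By contrast, something like `counter++` on a volatile int violates the first criterion: it is a read-modify-write, and volatile alone does not make it atomic.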
Upvotes: 1
Reputation: 65046
The compiler is responsible for ensuring the semantics of the memory model are preserved. In your example, the compiler would ensure that any cross-thread visible values are written to memory prior to the volatile write (on x86 a plain store is sufficient for this purpose).
Upvotes: 0
Reputation: 1025
The guarantee that the JMM gives is: if Thread 1 writes a volatile variable and after that Thread 2 reads that same volatile variable, then Thread 2 is guaranteed to see all changes made by Thread 1 prior to the volatile write (including changes to non-volatile variables). This is a strong, well-established guarantee.
However, the guarantee applies only to what Thread 2 sees. You may still have another thread, Thread 3, which may NOT see up-to-date values for the non-volatile fields set by Thread 1 (Thread 3 may have cached, and is allowed to cache, values for those non-volatile fields). Only after Thread 3 reads the same volatile variable is it guaranteed to see the non-volatile writes from Thread 1.
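A minimal sketch of that scenario (names invented for the example): the main thread plays Thread 1, a second thread reads the same volatile and therefore must see the non-volatile write, while a Thread 3 that skipped the volatile read would have no such guarantee:

```java
public class ThreeThreadsSketch {
    static int data = 0;                  // non-volatile
    static volatile boolean flag = false; // volatile

    static int run() throws InterruptedException {
        int[] seen = new int[1];

        // "Thread 2": reads the same volatile `flag` first; once it
        // observes true, it is guaranteed to also see data == 1.
        Thread t2 = new Thread(() -> {
            while (!flag) {
                Thread.onSpinWait();
            }
            seen[0] = data;
        });
        t2.start();

        // "Thread 1" (here, the calling thread): the non-volatile write
        // first, then the volatile write that publishes it.
        data = 1;
        flag = true;

        t2.join();
        // A "Thread 3" that read `data` WITHOUT first reading `flag`
        // would have no happens-before edge with Thread 1 and could
        // legally keep seeing the stale value 0.
        return seen[0]; // always 1
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 1
    }
}
```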
Upvotes: 4