Reputation: 903
I understand that synchronization can be used to enforce the visibility of a variable that is updated by more than one thread. That is, suppose we have a variable shared by two threads: if one thread updates it, the other thread is not guaranteed to see the newly updated value unless access to the variable is properly synchronized.
But I want to know what happens under the covers to cause this. Can someone let me know?
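For example, here is a minimal sketch (the class name is just for illustration) of the situation I mean: the reader thread below may loop forever because it is never guaranteed to see the main thread's write.

```java
// Shared flag with no synchronization and no volatile: the reader thread
// may never observe the update made by the main thread.
public class VisibilityDemo {
    static boolean stop = false; // shared, not volatile, not synchronized

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // busy-wait; the JIT may hoist the read of 'stop' out of the loop
            }
            System.out.println("Reader saw the update");
        });
        reader.start();

        Thread.sleep(1000);
        stop = true; // this write is not guaranteed to become visible to 'reader'
        reader.join();
    }
}
```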
Many thanks.
Upvotes: 2
Views: 65
Reputation: 17874
This issue occurs on multi-core/multi-processor machines. If the variable is not volatile, the JIT compiler is free to keep its value in a register or in each CPU core's cache. Threads running simultaneously on other cores will not see changes to the variable until it is written back to RAM or the cache is flushed.
If you write to a volatile variable, or use another synchronisation mechanism, the CPU caches are flushed immediately.
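As a minimal sketch (reusing the hypothetical flag example from the question), declaring the flag `volatile` is enough to guarantee the reader eventually sees the update:

```java
// Marking the flag volatile forces every read to observe the most recent
// write, so the reader thread is guaranteed to see 'stop = true'.
public class VolatileDemo {
    static volatile boolean stop = false; // volatile: visibility guaranteed

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // the JIT may not cache 'stop' in a register across iterations
            }
            System.out.println("Reader saw the update");
        });
        reader.start();

        Thread.sleep(1000);
        stop = true; // volatile write: visible to the reader's next volatile read
        reader.join();
    }
}
```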
Upvotes: 0
Reputation: 533750
There are many other ways to ensure visibility, such as volatile and ordered/lazy set.
When you enter a synchronized block it performs a read barrier. This means all reads after it will be consistent. When you exit a synchronized block it performs a write barrier. This ensures that all writes are in a consistent order. The actual details of how this is done are determined by the CPU, but on x86/x64 it is a single machine-code instruction or prefix.
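A minimal sketch of the "ordered/lazy set" option, using `AtomicInteger.lazySet` (the class here is only illustrative): it publishes the value with ordered-store semantics but without the full fence of a volatile write, which can be cheaper on the writer side.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    private final AtomicInteger counter = new AtomicInteger();

    void publish(int value) {
        counter.lazySet(value); // ordered write: eventually visible to readers
    }

    int read() {
        return counter.get();   // volatile-strength read
    }
}
```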
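To make the barriers concrete, here is a minimal sketch (hypothetical class) of guarding a field with a synchronized block: the writer's exit of the block acts as the write barrier and the reader's entry acts as the read barrier, so a value stored before the lock is released is visible after the lock is acquired.

```java
public class SyncDemo {
    private final Object lock = new Object();
    private int value; // guarded by 'lock'

    void write(int v) {
        synchronized (lock) { // exiting this block: write barrier
            value = v;
        }
    }

    int read() {
        synchronized (lock) { // entering this block: read barrier
            return value;
        }
    }
}
```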
Upvotes: 3