GazTheDestroyer

Reputation: 21261

Does locking ensure reads and writes are flushed from caches? If so, how?

I was reading this MSDN article on lockless thread syncing. The article seems to imply that as long as you enter a lock before accessing shared variables, those variables will be up to date (in .NET 2.0 at least).

I got to wondering how this is possible. A lock in .NET is just some arbitrary object that all threads check before accessing memory, but the lock itself has no knowledge of the memory locations being accessed.

If I have a thread updating a variable, or even a whole chunk of memory, how are those updates guaranteed to be flushed from CPU caches when entering/exiting a lock? Are ALL memory accesses effectively made volatile inside the lock?
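For concreteness, here is a minimal sketch of the scenario the question describes, written in Java, whose memory model gives the same lock-based visibility guarantee; the class and field names are illustrative:

```java
// Minimal sketch: a write made while holding a lock is visible to any
// thread that later acquires the SAME lock, even without volatile.
public class LockVisibility {
    private static final Object lock = new Object();
    private static int shared = 0;   // deliberately NOT volatile

    static int runOnce() throws InterruptedException {
        Thread writer = new Thread(() -> {
            synchronized (lock) {
                shared = 42;         // write inside the critical section
            }                        // unlock: the write is published here
        });
        writer.start();
        writer.join();

        synchronized (lock) {        // lock: prior lockers' writes are visible
            return shared;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnce()); // prints 42
    }
}
```

The reader never names the `shared` field to the lock; the lock operations themselves carry the visibility guarantee, which is exactly the puzzle the question raises.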

Upvotes: 9

Views: 1405

Answers (4)

Ringding

Reputation: 2856

I’m not sure about the state of affairs in .NET, but in Java it is clearly stated that any two threads cooperating in such a way must use the same object for locking in order to benefit from what you say in your introductory statement, not just any lock. This is a crucial distinction to make.

A lock doesn’t need to “know” what it protects; it just needs to make sure that everything that has been written by the previous locker is made available to another locker before letting it proceed.
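A hedged Java sketch of this distinction (the names `lockA` and `lockB` are hypothetical): only threads that synchronize on the same object get the visibility guarantee.

```java
// Sketch: visibility is only guaranteed between threads that synchronize
// on the SAME monitor object. Locking a different object buys nothing.
public class SameLockMatters {
    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int shared;

    void write(int v) {
        synchronized (lockA) { shared = v; }    // publish under lockA
    }

    int readCorrect() {
        synchronized (lockA) { return shared; } // same monitor: sees the write
    }

    int readBroken() {
        synchronized (lockB) { return shared; } // different monitor: NO guarantee
    }
}
```

In practice `readBroken()` will often return the new value anyway on strongly ordered hardware, but the Java Memory Model does not promise it.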

Upvotes: 0

Bahbar

Reputation: 18015

Not all C# memory reads and writes are volatile, no. (Imagine the performance cost if that were the case!)

But.

How are those updates guaranteed to be flushed from CPU caches when entering / exiting a lock

CPU caches are CPU specific; however, they all implement some form of cache coherence protocol. That is to say, when you access some memory from a core, if it is present in another core's cache, the protocol ensures that the data gets delivered to the local core.

What Petar Ivanov alludes to in his answer is, however, very relevant. You should read up on memory consistency models if you want to understand his point in more depth.

Now, how C# guarantees that the memory is up-to-date is up to the C# implementers, and Eric Lippert's blog is certainly a good place to understand the underlying issues.

Upvotes: 1

Petar Ivanov

Reputation: 93090

Well, the article explains it:

  2. Reads cannot move before entering a lock.

  3. Writes cannot move after exiting a lock.

And more explanation from the same article:

When a thread exits the lock, the third rule ensures that any writes made while the lock was held are visible to all processors. Before the memory is accessed by another thread, the reading thread will enter a lock and the second rule ensures that the reads happen logically after the lock was taken.
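Those two rules are the acquire/release semantics of a monitor. A small annotated Java sketch (the names are assumptions for illustration) shows where each rule takes effect:

```java
// Annotated sketch: monitor-enter acts as an acquire barrier,
// monitor-exit as a release barrier.
public class AcquireRelease {
    private final Object lock = new Object();
    private int data;
    private boolean ready;

    void publish(int value) {
        synchronized (lock) {  // enter = acquire: ops inside cannot move above
            data = value;
            ready = true;
        }                      // exit = release: ops inside cannot move below,
                               // so both writes are visible to the next locker
    }

    int consume() {
        synchronized (lock) {  // acquire: sees every write the publisher made
            return ready ? data : -1;
        }
    }
}
```
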

Upvotes: 1

Polity

Reputation: 15140

Check the work of Eric Lippert: http://blogs.msdn.com/b/ericlippert/archive/2011/06/16/atomicity-volatility-and-immutability-are-different-part-three.aspx

Locks guarantee that memory read or modified inside the lock is observed to be consistent, locks guarantee that only one thread accesses a given hunk of memory at a time, and so on.

So yes, as long as you lock each time before accessing shared resources, you can be pretty sure it's up to date.

EDIT: see the following post for more information and a very useful overview: http://igoro.com/archive/volatile-keyword-in-c-memory-model-explained/

Upvotes: 6
