Reputation: 389
The ECMA-335 specification states the following:
*Acquiring a lock (System.Threading.Monitor.Enter or entering a synchronized method) shall implicitly perform a volatile read operation, and releasing a lock (System.Threading.Monitor.Exit or leaving a synchronized method) shall implicitly perform a volatile write operation. (...)
A volatile read has acquire semantics meaning that the read is guaranteed to occur prior to any references to memory that occur after the read instruction in the CIL instruction sequence. A volatile write has release semantics meaning that the write is guaranteed to happen after any memory references prior to the write instruction in the CIL instruction sequence.*
This means that compilers cannot move statements out of a Monitor.Enter/Monitor.Exit block, but statements are not forbidden from being moved into the block. Perhaps even another Monitor.Enter could be moved into the block (since a volatile write followed by a volatile read may be swapped). So, could the following code:
class SomeClass
{
    object _locker1 = new object();
    object _locker2 = new object();

    public void A()
    {
        Monitor.Enter(_locker1);
        //Do something
        Monitor.Exit(_locker1);
        Monitor.Enter(_locker2);
        //Do something
        Monitor.Exit(_locker2);
    }

    public void B()
    {
        Monitor.Enter(_locker2);
        //Do something
        Monitor.Exit(_locker2);
        Monitor.Enter(_locker1);
        //Do something
        Monitor.Exit(_locker1);
    }
}
, be turned into an equivalent of the following:
class SomeClass
{
    object _locker1 = new object();
    object _locker2 = new object();

    public void A()
    {
        Monitor.Enter(_locker1);
        //Do something
        Monitor.Enter(_locker2);
        Monitor.Exit(_locker1);
        //Do something
        Monitor.Exit(_locker2);
    }

    public void B()
    {
        Monitor.Enter(_locker2);
        //Do something
        Monitor.Enter(_locker1);
        Monitor.Exit(_locker2);
        //Do something
        Monitor.Exit(_locker1);
    }
}
, possibly leading to deadlocks? Or am I missing something?
Upvotes: 6
Views: 1426
Reputation: 16162
When you use lock, or Monitor.Enter and Monitor.Exit, these act as full fences: they create a memory "barrier" (as Thread.MemoryBarrier() does) at the beginning of the lock (Monitor.Enter) and before the end of the lock (Monitor.Exit). So no operation will move out of the lock, either before it or after it. Note that the operations within the lock itself can appear swapped from other threads' perspectives, but that has never been an issue, since the lock guarantees mutual exclusion, so only one thread executes the code within the lock at a time. The reordering is never observable from within a single thread; only when multiple threads enter the same code region may they see the instructions in different orders.
I strongly recommend reading more about MemoryBarrier and full and half fences in this article.
Edit: Note that here I am describing the fact that lock is a full fence, not the deadlock you are asking about. The scenario you describe will never occur because, as @Hans mentioned, the reordering never happens across method calls, i.e.:
Method1();
Method2();
Method3();
These will always execute sequentially, but the instructions within them may be reordered, for example when multiple threads execute the code inside Method1().
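As a rough illustration of that last point (the method bodies below are hypothetical, not from the answer), this is how the instructions inside Method1() can appear reordered to a second thread when no lock or fence is used:

class ReorderingExample
{
    private int _data;
    private bool _flag;

    public void Method1()                   // run by thread 1
    {
        _data = 1;
        _flag = true;                       // on a weak memory model this store may
                                            // become visible before the store to _data
    }

    public void ObserveFromAnotherThread()  // run by thread 2
    {
        if (_flag)
        {
            int observed = _data;           // may still read 0: thread 2 sees the
                                            // writes of Method1 in a different order
        }
    }
}

Within the thread running Method1() the program order is always respected; only the view from the other thread can differ.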
Upvotes: 2
Reputation: 456477
The ECMA-335 spec is a lot weaker than what the CLR (and every other implementation) uses.
I remember reading (hearsay) about Microsoft's first attempt to port to IA-64 using a weaker memory model. They had so much of their own code depending on the double-checked locking idiom (which is broken under the weaker memory model) that they just implemented the stronger model on that platform.
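For reference, here is a sketch of that idiom in its textbook form (the standard pattern, not code from that port): under the ECMA-335 model the field must be declared volatile for the pattern to be correct, and code that omits the volatile only works because the actual implementation's memory model is stronger.

class Singleton
{
    // 'volatile' is what the weaker ECMA-335 model requires;
    // the broken variant of the idiom omits it.
    private static volatile Singleton _instance;
    private static readonly object _sync = new object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (_instance == null)              // first check, without the lock
            {
                lock (_sync)
                {
                    if (_instance == null)      // second check, under the lock
                        _instance = new Singleton();
                }
            }
            return _instance;
        }
    }
}

In modern code, Lazy&lt;T&gt; is usually a simpler way to get the same guarantee.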
Joe Duffy has a great post summarizing the (actual) CLR memory model for us mere mortals. There's also a link to an MSDN article that explains in more detail how the CLR differs from ECMA-335.
I don't believe it's an issue in practice; just assume the CLR memory model, since everyone else does. No one would create a weak implementation at this point, since most code would simply break.
Upvotes: 2