Reputation: 390
My expectation was that when a mutex is unlocked, the scheduler checks for other threads currently trying to lock it and then runs one of those waiting threads. I wrote a test program (see the code below) with two threads that both try to acquire the same mutex in a loop and do some work (sleep for 1 ms). The difference is that one thread, t1, waits a short while between unlocking and reacquiring the mutex, while the other thread, t2, does not. I expected both threads to acquire the mutex about the same number of times. However, on Windows t1 often acquires the mutex only a single time while the other thread acquires it hundreds of times. On Linux the behavior is different: both threads get work done, with t2 acquiring the mutex roughly twice as often. Why does t1 on Windows almost never acquire the mutex, and how do I have to modify the code so that it does?
Example code:
#include <iostream>
#include <thread>
#include <mutex>
#include <atomic>
#include <chrono>   // for the 100us/1ms/1s duration literals

using namespace std;

int main()
{
    mutex m;
    atomic<bool> go(false);
    int t1Counter = 0, t2Counter = 0;

    // t1 pauses briefly between releasing the mutex and reacquiring it.
    thread t1([&] {
        while(!go);                       // spin until the start signal
        while(go) {
            this_thread::sleep_for(100us);
            lock_guard<mutex> lg(m);
            this_thread::sleep_for(1ms);  // simulate work under the lock
            ++t1Counter;
        }
    });

    // t2 tries to reacquire the mutex immediately after releasing it.
    thread t2([&] {
        while(!go);
        while(go) {
            lock_guard<mutex> lg(m);
            this_thread::sleep_for(1ms);
            ++t2Counter;
        }
    });

    go = true;
    this_thread::sleep_for(1s);           // let both threads run for a second
    go = false;
    t1.join();
    t2.join();
    cout << t1Counter << " " << t2Counter << endl;
}
Upvotes: 2
Views: 533
Reputation: 12485
On Windows, std::mutex is implemented using a slim reader/writer (SRW) lock. This lock implementation is not fair: it makes no guarantees about the order in which waiting threads will acquire the lock. Windows moved away from fair locks some time ago because Microsoft deemed lock convoys a more serious problem than thread starvation.
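For reference, this is the Win32 primitive underneath; a minimal sketch of using it directly (Windows-only, error handling omitted):

#include <windows.h>

// An SRW lock is statically initialized and needs no explicit destruction.
SRWLOCK lock = SRWLOCK_INIT;

void worker()
{
    AcquireSRWLockExclusive(&lock);   // exclusive (writer) acquisition
    // ... critical section ...
    ReleaseSRWLockExclusive(&lock);   // wakeup order of waiters is unspecified
}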
You can read more about slim reader/writer locks in the Microsoft docs: Slim Reader/Writer (SRW) Locks
Joe Duffy has also blogged about the fairness versus lock convoy issue: Anti-convoy locks in Windows Server 2003 SP1 and Windows Vista
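If you need t1 to make progress, you have to provide the fairness yourself, since neither std::mutex nor SRW locks guarantee it. Below is a minimal sketch of a FIFO "ticket" mutex built from std::mutex and std::condition_variable (the name FairMutex is my own, not a standard type); waiters are served strictly in arrival order, at the cost of extra wakeups:

#include <condition_variable>
#include <mutex>

class FairMutex {
    std::mutex m_;
    std::condition_variable cv_;
    unsigned next_ = 0;    // next ticket to hand out
    unsigned serving_ = 0; // ticket currently allowed to run
public:
    void lock() {
        std::unique_lock<std::mutex> lk(m_);
        const unsigned ticket = next_++;   // take a place in line
        cv_.wait(lk, [&] { return ticket == serving_; });
    }
    void unlock() {
        std::lock_guard<std::mutex> lk(m_);
        ++serving_;        // hand the lock to the next ticket in line
        cv_.notify_all();  // wake all waiters; only the next ticket proceeds
    }
};

FairMutex satisfies the BasicLockable requirements, so in your test program you can simply replace mutex m; with FairMutex m; and lock_guard<mutex> with lock_guard<FairMutex>. With a FIFO handoff, each unlock passes the lock to the longest-waiting thread, so both counters should end up roughly equal.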
Upvotes: 4