Reputation: 1566
I understand that double check locking is broken for singleton lazy initialization:
// SingletonType* singleton;
// std::mutex mtx;
SingletonType* get()
{
    if (singleton == nullptr) {
        const std::lock_guard<std::mutex> _(mtx);
        if (singleton == nullptr) {
            singleton = new SingletonType();
        }
    }
    return singleton;
}
The above is broken because instructions may be reordered: the assignment to singleton may happen before the SingletonType constructor has finished. So while thread 1 holds the lock and is still constructing the object, thread 2 may already see singleton as non-null and use an object whose construction has not completed, leading to undefined behaviour.
With this understanding, I think double-checked locking is only broken for the singleton-initialization case. For example, I think the following usage is safe. Is this understanding correct?
// int id = 0;
// int threshold = 1000;
// std::mutex mtx;
void increment()
{
    int local = __atomic_fetch_add(&id, 1l, __ATOMIC_RELAXED);
    if (local >= threshold) {
        const std::lock_guard<std::mutex> _(mtx);
        if (local >= threshold) {
            // Do periodic job every 1000 id interval
            threshold += 1000;
        }
    }
}
Upvotes: 3
Views: 150
Reputation: 40941
The only undefined behaviour here is that one thread reads the pointer while another thread writes to it without synchronisation, which is a data race. In practice this will often work (especially on platforms where writing a pointer is atomic), but you can easily be explicit:
std::atomic<SingletonType*> singleton;
std::mutex mtx;

SingletonType* get()
{
    SingletonType* result = singleton.load(std::memory_order_relaxed);
    if (!result) {
        std::scoped_lock _(mtx);
        result = singleton.load(std::memory_order_relaxed);
        if (!result) {
            result = new SingletonType();
            singleton.store(result, std::memory_order_relaxed);
        }
    }
    return result;
}
// Or with gcc builtins
SingletonType* singleton;
std::mutex mtx;

SingletonType* get()
{
    SingletonType* result;
    __atomic_load(&singleton, &result, __ATOMIC_RELAXED);
    if (!result) {
        std::scoped_lock _(mtx);
        __atomic_load(&singleton, &result, __ATOMIC_RELAXED);
        if (!result) {
            result = new SingletonType();
            __atomic_store(&singleton, &result, __ATOMIC_RELAXED);
        }
    }
    return result;
}
However, there is an easier implementation:
SingletonType* get()
{
    static SingletonType singleton;
    return &singleton;

    // Or, if your class has a destructor you want to avoid running at exit:
    // static SingletonType* singleton = new SingletonType();
    // return singleton;
}
This is typically also implemented as a double-checked lock, except on a hidden isSingletonConstructed bool instead of on whether the pointer is null.
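For illustration, a hand-written sketch of that compiler-generated guard could look roughly like this (simplified; real implementations call __cxa_guard_acquire/__cxa_guard_release and the exact memory orders are ABI details, but the shape is the same double-checked lock on a "constructed" flag):
#include <atomic>
#include <mutex>
#include <new>

SingletonType* get()
{
    alignas(SingletonType) static unsigned char storage[sizeof(SingletonType)];
    static std::atomic<bool> constructed{false}; // the hidden "is it constructed yet?" flag
    static std::mutex guard_mtx;

    if (!constructed.load(std::memory_order_acquire)) {      // first (unlocked) check
        std::scoped_lock _(guard_mtx);
        if (!constructed.load(std::memory_order_relaxed)) {  // second (locked) check
            new (storage) SingletonType();                    // construct in place
            constructed.store(true, std::memory_order_release);
        }
    }
    return reinterpret_cast<SingletonType*>(storage);
}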
Your initial worry seems to be that new SingletonType() is equivalent to operator new(sizeof(SingletonType)) followed by a constructor call on the acquired storage, and that a compiler might reorder the constructor call to after the pointer is assigned. However, the compiler is not allowed to reorder the assignment, because that would have observable effects (as you noted: another thread could return singleton while the constructor is still running).
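Spelled out, the reordering you are worried about would look something like this (purely illustrative; the function name is made up, and as noted above the compiler does not actually get to publish the pointer early like this):
#include <new>

SingletonType* singleton;

void construct_with_feared_reordering()
{
    // "singleton = new SingletonType();" conceptually decomposes into:
    void* storage = operator new(sizeof(SingletonType)); // 1. allocate raw storage
    singleton = static_cast<SingletonType*>(storage);    // 2. publish the pointer (the feared early publish)
    new (singleton) SingletonType();                      // 3. run the constructor
    // A thread that reads singleton between steps 2 and 3 would see a
    // non-null pointer to a not-yet-constructed object.
}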
Your increment function can concurrently read and write threshold (the read in the first check of the double-checked lock, and the threshold += 1000 write after acquiring the mutex), so it has a data race.
You can fix it like so:
void increment()
{
    int local = __atomic_fetch_add(&id, 1l, __ATOMIC_RELAXED);
    if (local >= __atomic_load_n(&threshold, __ATOMIC_RELAXED)) {
        const std::lock_guard<std::mutex> _(mtx);
        int local_threshold = __atomic_load_n(&threshold, __ATOMIC_RELAXED);
        if (local >= local_threshold) {
            // Do periodic job every 1000 id interval
            __atomic_store_n(&threshold, local_threshold + 1000, __ATOMIC_RELAXED);
        }
    }
}
But you don't really need the atomic threshold in this case: as long as id is only modified via increment, local takes every integer value exactly once across all threads, so you can instead do something like:
// int id = 0;
// constexpr int threshold = 1000;
// std::mutex mtx; // Don't need if jobs can run in parallel
void increment()
{
    int local = __atomic_fetch_add(&id, 1l, __ATOMIC_RELAXED);
    if (local == 0) return;
    if (local % threshold == 0) {
        const std::lock_guard<std::mutex> _(mtx);
        // Do periodic job every 1000 id interval
    }
}
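For completeness, a minimal sketch of how this could be exercised (run_periodic_job is a placeholder for whatever the periodic work is, and the thread and iteration counts are arbitrary):
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int id = 0;
constexpr int threshold = 1000;
std::mutex mtx;

void run_periodic_job(int local) // placeholder for the real periodic work
{
    std::printf("periodic job at id %d\n", local);
}

void increment()
{
    int local = __atomic_fetch_add(&id, 1, __ATOMIC_RELAXED);
    if (local == 0) return;
    if (local % threshold == 0) {
        const std::lock_guard<std::mutex> _(mtx);
        run_periodic_job(local);
    }
}

int main()
{
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back([] { for (int i = 0; i < 10000; ++i) increment(); });
    for (auto& th : threads) th.join();
    // The job runs once per 1000 increments (ids 1000, 2000, ...).
}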
Upvotes: 2