Antoine Morrier

Reputation: 4078

Relaxed ordering and inter-thread visibility

I learned that with relaxed ordering, a store to an atomic variable should become visible to other threads "within a reasonable amount of time".

That said, I am pretty sure it should happen in a very short time (a few nanoseconds?). However, I don't want to rely on "within a reasonable amount of time".

So, here is some code:

#include <atomic>

void produceData(); // defined elsewhere

std::atomic_bool canBegin{false};

void functionThatWillBeLaunchedInThreadA() {
    if(canBegin.load(std::memory_order_relaxed))
        produceData();
}

void functionThatWillBeLaunchedInThreadB() {
    canBegin.store(true, std::memory_order_relaxed);
}

Threads A and B are part of a kind of thread pool, so there is no thread creation or destruction involved in this problem. I don't need to protect any data, so acquire/consume/release ordering on the atomic store/load is not needed here (I think?).

We know for sure that functionThatWillBeLaunchedInThreadA will be launched AFTER functionThatWillBeLaunchedInThreadB has finished.

However, with such code, we don't have any guarantee that the store will be visible in thread A, so thread A could read a stale value (false).

Here are some solutions I thought about.

Solution 1: use volatile

Just declare volatile std::atomic_bool canBegin{false};. Here the volatility guarantees that we will not see a stale value.

Solution 2: use a mutex or spinlock

Here the idea is to protect access to canBegin with a mutex/spinlock whose release/acquire ordering guarantees that we will not see a stale value. The flag doesn't even need to be atomic in that case.
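For example, something like this (just a sketch; the locking code is mine):

#include <mutex>

std::mutex mtx;
bool canBegin = false; // plain bool now, protected by mtx

void functionThatWillBeLaunchedInThreadA() {
    bool go;
    {
        std::lock_guard<std::mutex> lock(mtx);
        go = canBegin;
    }
    if(go) produceData();
}

void functionThatWillBeLaunchedInThreadB() {
    std::lock_guard<std::mutex> lock(mtx);
    canBegin = true;
}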

Solution 3: not sure at all, but a memory fence?

Maybe this code will not work, so tell me :).

bool canGo{false}; // not an atomic value now
// in thread A
std::atomic_thread_fence(std::memory_order_acquire);
if(canGo) produceData();

// in thread B
canGo = true;
std::atomic_thread_fence(std::memory_order_release);

On cppreference, for this case, it is written that:

all non-atomic and relaxed atomic stores that are sequenced-before FB in thread B will happen-before all non-atomic and relaxed atomic loads from the same locations made in thread A after FA

Which solution would you use and why?

Upvotes: 2

Views: 761

Answers (2)

Peter Cordes

Reputation: 363999

There's nothing you can do to make a store visible to other threads any sooner. See If I don't use fences, how long could it take a core to see another core's writes? - barriers don't speed up visibility to other cores, they just make this core wait until that's happened.

The store part of an RMW is not different from a pure store for this, either.

(Certainly on x86; not totally sure about other ISAs, where a relaxed LL/SC might possibly get special treatment from the store buffer, possibly being more likely to commit before other stores if this core can get exclusive ownership of the cache line. But I think it still would have to retire from out-of-order execution so the core knows it's not speculative.)

Anthony's answer that was linked in a comment is misleading; as I commented there:

If the RMW runs before the other thread's store commits to cache, it doesn't see the value, just like if it was a pure load. Does that mean "stale"? No, it just means that the store hasn't happened yet.

The only reason RMWs need a guarantee about "latest" value is that they're inherently serializing operations on that memory location. This is what you need if you want 100 unsynchronized fetch_add operations to not step on each other and be equivalent to += 100, but otherwise best-effort / latest-available value is fine, and that's what you get from a normal atomic load.
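For example (a minimal sketch, not code from the question): 100 unsynchronized relaxed fetch_adds still sum exactly, because each RMW serializes on that memory location:

#include <atomic>
#include <thread>
#include <vector>

std::atomic<int> counter{0};

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 100; ++i)
        threads.emplace_back([] {
            // Each RMW atomically reads, modifies, and writes the latest
            // value, so no increment is lost even with relaxed ordering.
            counter.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& t : threads) t.join();
    // counter is now exactly 100, as if one thread had done counter += 100.
}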

If you require instant visibility of results (a nanosecond or so), that's only possible within a single thread, like x = y; x += z;


Also note, the C / C++ standard requirement (actually just a note) to make stores visible in a reasonable amount of time is in addition to the requirements on ordering of operations. It doesn't mean seq_cst store visibility can be delayed until after later loads. All seq_cst operations happen in some interleaving of program order across all threads.

On real-world C++ implementations, the visibility time is entirely up to hardware inter-core latency. But the C++ standard is abstract, and could in theory be implemented on a CPU that required manual flushing to make stores visible to other threads. Then it would be up to the compiler to not be lazy and defer that for "too long".


volatile atomic<T> is useless; compilers already don't optimize atomic<T>, so every atomic access done by the abstract machine will already happen in the asm. (Why don't compilers merge redundant std::atomic writes?) That's all that volatile does, so volatile atomic<T> compiles to the same asm as atomic<T> for anything you can do with the atomic.

Defining "stale" is a problem because separate threads running on separate cores can't see each other's actions instantly. It takes tens of nanoseconds on modern hardware to see a store from another thread.

But you can't read "stale" values from cache; that's impossible because real CPUs have coherent caches. (That's why volatile int could be used to roll your own atomics before C++11, but is no longer useful.) You may need an ordering stronger than relaxed to get the semantics you want as far as one value being older than another (i.e. "reordering", not "stale"). But for a single value, if you don't see a store, that means your load executed before the other core took exclusive ownership of the cache line in order to commit its store. i.e. that the store hasn't truly happened yet.

In the formal ISO C++ rules, there are guarantees about what value you're allowed to see which effectively give you the guarantees you'd expect from cache coherency for a single object, like that after a reader sees a store, further loads in this thread won't see some older store and then eventually back to the newest store. (https://eel.is/c++draft/intro.multithread#intro.races-19).
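As a sketch of what that rule buys you for a single object (my example, not code from the question):

#include <atomic>

std::atomic<int> x{0}; // another thread does x.store(1, std::memory_order_relaxed)

void reader() {
    int a = x.load(std::memory_order_relaxed);
    int b = x.load(std::memory_order_relaxed);
    // If a == 1, then b == 1 is guaranteed: once this thread has seen the
    // newer store, a later load of the same object can't see the older one.
}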

(Note for 2 writers + 2 readers with non-seq_cst operations, it's possible for the readers to disagree about the order in which the stores happened. This is called IRIW reordering, but most hardware can't do it; only some PowerPC. Will two atomic writes to different locations in different threads always be seen in the same order by other threads? - so it's not always quite as simple as "the store hasn't happened yet", it can be visible to some threads before others. But it's still true that you can't speed up visibility, only for example slow down the readers so none of them see it via the "early" mechanism, i.e. with hwsync for the PowerPC loads to drain the store buffer first.)
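The IRIW litmus test looks like this (my sketch; x, y, and the r variables are illustrative names):

#include <atomic>

std::atomic<int> x{0}, y{0};
int r1, r2, r3, r4;

void writer1() { x.store(1, std::memory_order_release); }
void writer2() { y.store(1, std::memory_order_release); }

void reader1() {
    r1 = x.load(std::memory_order_acquire); // sees x == 1
    r2 = y.load(std::memory_order_acquire); // ...but y == 0
}

void reader2() {
    r3 = y.load(std::memory_order_acquire); // sees y == 1
    r4 = x.load(std::memory_order_acquire); // ...but x == 0
}
// The outcome r1 == 1, r2 == 0, r3 == 1, r4 == 0 means the readers disagree
// about which store happened first. It's allowed with acquire/release, and
// forbidden if all of these operations are seq_cst.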

Upvotes: 2

Humphrey Winnebago

Reputation: 1682

We know for sure that functionThatWillBeLaunchedInThreadA will be launched AFTER functionThatWillBeLaunchedInThreadB has finished.

First of all, if this is the case then it's likely that your task queue mechanism takes care of the necessary synchronization already.

On to the answer...

By far the simplest thing to do is acquire/release ordering. All the solutions you gave are worse.

std::atomic_bool canBegin{false};

void functionThatWillBeLaunchedInThreadA() {
    if(canBegin.load(std::memory_order_acquire))
        produceData();
}

void functionThatWillBeLaunchedInThreadB() {
    canBegin.store(true, std::memory_order_release);
}

By the way, shouldn't this be a while loop?

void functionThatWillBeLaunchedInThreadA() {
    while (!canBegin.load(std::memory_order_acquire))
    { }
    produceData();
}

I don't need to protect any data, so acquire/consume/release ordering on the atomic store/load is not needed here (I think?)

In this case, the ordering is required to keep the compiler/CPU/memory subsystem from reordering the store of true to canBegin before the previous reads/writes have completed. And it should actually stall the CPU until it can be guaranteed that every write that comes before it in program order has propagated before the store to canBegin. On the load side, it prevents memory from being read/written before canBegin is read as true.
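For example, if thread B prepares some input before signaling (a sketch; the sharedInput variable and produceData's parameter are mine):

#include <atomic>

int sharedInput; // plain, non-atomic data
std::atomic_bool canBegin{false};

void functionThatWillBeLaunchedInThreadB() {
    sharedInput = 42;                                 // (1) write the data
    canBegin.store(true, std::memory_order_release);  // (2) can't be reordered before (1)
}

void functionThatWillBeLaunchedInThreadA() {
    if (canBegin.load(std::memory_order_acquire))     // (3) can't be reordered after (4)
        produceData(sharedInput);                     // (4) guaranteed to see 42
}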

However, with such code, we don't have any guarantee that the store will be visible in thread A, so thread A could read a stale value (false).

You said yourself:

a store to an atomic variable should become visible to other threads "within a reasonable amount of time".

Even with relaxed memory order, a write is guaranteed to eventually reach the other cores and all cores will eventually agree on any given variable's store history, so there are no stale values. There are only values that haven't propagated yet. What's "relaxed" about it is the store order in relation to other variables. Thus, memory_order_relaxed solves the stale read problem (but doesn't address the ordering required as discussed above).
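To see what is "relaxed" about it, consider two variables (my sketch):

#include <atomic>

std::atomic<int> a{0}, b{0};

void writer() {
    a.store(1, std::memory_order_relaxed);
    b.store(1, std::memory_order_relaxed);
}

void reader() {
    int rb = b.load(std::memory_order_relaxed);
    int ra = a.load(std::memory_order_relaxed);
    // rb == 1 && ra == 0 is a legal outcome: both stores eventually become
    // visible, but relaxed ordering makes no promise about the order in
    // which this thread sees them.
}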

Don't use volatile. It doesn't provide all the guarantees required of atomics in the C++ memory model, so using it would be undefined behavior. See https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering at the bottom to read about it.

You could use a mutex or spinlock, but a mutex operation is much more expensive than a lock-free std::atomic acquire-load/release-store. A spinlock will do at least one atomic read-modify-write operation...and possibly many. A mutex is definitely overkill. But both have the benefit of simplicity in the C++ source. Most people know how to use locks so it's easier to demonstrate correctness.
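To make the spinlock cost concrete, here is a minimal one (a sketch, not a production lock); every lock() attempt is an atomic RMW:

#include <atomic>

class Spinlock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    void lock() {
        // test_and_set is an atomic RMW; under contention this loop keeps
        // hammering the cache line with RMWs until the owner releases it.
        while (flag.test_and_set(std::memory_order_acquire)) { }
    }
    void unlock() {
        flag.clear(std::memory_order_release);
    }
};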

A memory fence will also work but your fences are in the wrong spot (it's counter-intuitive) and the inter-thread communication variable should be std::atomic. (Careful when playing these games...! It's easy to get undefined behavior) Relaxed ordering is ok thanks to the fences.

std::atomic<bool> canGo{false}; // MUST be atomic

// in thread A
if(canGo.load(std::memory_order_relaxed))
{
    std::atomic_thread_fence(std::memory_order_acquire);
    produceData();
}

// in thread B
std::atomic_thread_fence(std::memory_order_release);
canGo.store(true, std::memory_order_relaxed);

The memory fences are actually stricter than acquire/release ordering on the std::atomic load/store, so this gains nothing and could be more expensive.

It seems like you really want to avoid overhead with your signaling mechanism. This is exactly what the std::atomic acquire/release semantics were invented for! You are worrying too much about stale values. Yes, an atomic RMW will give you the "latest" value, but RMWs are also very expensive operations themselves. I want to give you an idea of how fast acquire/release is. It's most likely that you're targeting x86. x86 has total store order and word-sized loads/stores are atomic, so an acquire load compiles to just a regular load, and a release store compiles to a regular store. So it turns out that almost everything in this long post will probably compile to exactly the same code anyway.
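To illustrate (my annotations of the typical x86-64 code generation; check a compiler explorer for your exact compiler):

#include <atomic>

std::atomic_bool ready{false};

bool loadAcquire() {
    return ready.load(std::memory_order_acquire); // x86: an ordinary mov load
}

void storeRelease() {
    ready.store(true, std::memory_order_release); // x86: an ordinary mov store
}

void storeSeqCst() {
    ready.store(true, std::memory_order_seq_cst); // x86: xchg (or mov + mfence)
}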

Upvotes: 1
