Reputation: 311
I've been hearing so many conflicting answers, and now I don't know what to think. The supposedly agreed-upon knowledge is that to share memory in a thread-safe manner in C++, you have to use volatile together with std::mutex.
Based on that understanding, I've been writing code like this:
volatile bool ready = false;
std::condition_variable cv;
std::mutex mtx;
std::unique_lock<std::mutex> lckr{ mtx };
cv.wait(lckr, [&ready]() -> bool { return ready; });
But then I saw a CppCon talk by Chandler Carruth where he said (as a side note) that volatile is not required in this situation, and that I should basically never use volatile.
I then saw other answers on Stack Overflow saying that volatile should never be used for this, that it's not good enough, and that it doesn't guarantee atomicity at all.
Is Chandler Carruth correct? Are we both wrong?
Now I have three options. I want to know whether the C++14 ISO standard allows me to write code like this:
#include <condition_variable>
#include <mutex>
#include <iostream>
#include <future>
#include <functional>
struct sync_t
{
    std::condition_variable cv;
    std::mutex mtx;
    bool ready{ false };
};

static void threaded_func(sync_t& sync)
{
    std::lock_guard<std::mutex> lckr{ sync.mtx };
    sync.ready = true;
    std::cout << "Waking up main thread" << std::endl;
    sync.cv.notify_one();
}

int main()
{
    sync_t sync;
    {
        std::unique_lock<std::mutex> lckr{ sync.mtx };
        sync.ready = false;
        std::future<void> thread =
            std::async(std::launch::async, threaded_func, std::ref(sync));
        std::cout << "Preparing to sleep" << std::endl;
        sync.cv.wait(lckr, [&sync]() -> bool { return sync.ready; });
        thread.get();
    }
    std::cout << "Done program execution" << std::endl;
    return 0;
}
and what happens when I make it:
volatile bool ready{ false };
and what happens when I make it:
std::atomic<bool> ready{ false };
Upvotes: 4
Views: 894
Reputation: 311
I've learned more since I asked the question. The answer is that Chandler Carruth is correct: a regular bool (with a std::mutex) is enough. No need for atomic, and no need for volatile. volatile should only be used when dealing with signal handlers, like this:
volatile std::sig_atomic_t
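For illustration, here is a minimal sketch of that signal-handler use; the flag and handler names are my own, not anything prescribed by the standard:

#include <csignal>
#include <cstdio>

volatile std::sig_atomic_t g_stop = 0;  // hypothetical flag name

void on_sigint(int)  // hypothetical handler name
{
    g_stop = 1;  // writing a volatile std::sig_atomic_t is allowed inside a signal handler
}

int main()
{
    std::signal(SIGINT, on_sigint);
    while (!g_stop)
    {
        // keep working until Ctrl+C arrives
    }
    std::puts("Got SIGINT, exiting");
    return 0;
}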
Contrary to popular belief, the C++ compiler isn't allowed to just "optimize out" your read of a boolean when you're using a std::mutex. That's because locking the mutex acts as a "fence", and after the mutex is locked the compiler has to assume that shared variables could have changed. The compiler can still optimize out local variables that it can prove haven't changed, but in my example of using a boolean as a predicate, I pass the boolean by reference to another function:
std::future<void> thread =
    std::async(std::launch::async, threaded_func, std::ref(sync));
The boolean lives inside "sync", so the compiler isn't allowed to assume that its value stays the same. The compiler can still keep the boolean's value in a register, but the moment I lock the std::mutex it is forced to reload the value, because it could have changed. Of course (according to the standard) std::condition_variable::wait returns with the std::mutex locked, so the boolean predicate is always checked while the std::mutex is held, which makes the whole thing safe.
So in summary: volatile is never required for multithreading, and std::mutex is enough.
The answer to my question is that the code I wrote with a plain boolean is safe, and all three options would also be safe. Using volatile would be safe, and using std::atomic would be safe. But using a regular boolean would be the most correct and efficient in this situation. In fact, if you have a lock (std::mutex) you never need std::atomic. It's important to note that if I didn't make an effort to use a std::mutex every time I read from the predicate then std::atomic would be required.
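As a concrete illustration of that last point, here is a minimal sketch (the structure and names are mine, not from the question) where every read of the plain bool happens under the same std::mutex, so neither volatile nor std::atomic is needed:

#include <chrono>
#include <mutex>
#include <thread>

struct shared_state
{
    std::mutex mtx;
    bool ready{ false };  // plain bool, only ever touched while mtx is held
};

static bool poll_ready(shared_state& s)
{
    std::lock_guard<std::mutex> lckr{ s.mtx };  // locking forces a fresh read of s.ready
    return s.ready;
}

int main()
{
    shared_state s;
    std::thread producer([&s]() {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        std::lock_guard<std::mutex> lckr{ s.mtx };
        s.ready = true;  // the write is also done under the lock
    });
    while (!poll_ready(s))
    {
        std::this_thread::yield();  // each check re-acquires the lock
    }
    producer.join();
    return 0;
}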
I took this knowledge from the various answers given to my question here, and I also tested it in Compiler Explorer with Clang 13. It would be interesting to see proof from the C++14 standard itself.
Upvotes: 1
Reputation: 182
volatile simply tells the compiler that something may change this value even though the compiler can't see who: for example some hardware, a signal handler, or even another thread. A famous example is:
bool flag;

void foo()
{
    flag = true;
    while (flag)
    {
        // spin until someone else sets flag to false
    }
}
An optimizing compiler will see that flag is true and, since it is only a normal global variable, it can assume that nothing but the current thread changes it. So the compiler may assume that flag is always true, and turn the while (flag) into while (1), an infinite loop.
But if you declare the flag variable as volatile, the compiler can't assume that only the current thread touches this value, so the loop stays as written.
Now for your question: volatile does tell the compiler that someone else may use this value, but it is not enough for multithreading, because it does not prevent a data race, which is undefined behavior in C++. That is why the bool flag needs to be declared as std::atomic.
Note that one of the things the compiler understands from a std::atomic declaration is that another thread may use this value, so it cannot make the optimization above.
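For completeness, a hedged sketch of the same loop with std::atomic<bool>; the second thread and the explicit memory orders are additions of mine to make the example runnable:

#include <atomic>
#include <thread>

std::atomic<bool> flag{ true };

void foo()
{
    while (flag.load(std::memory_order_acquire))
    {
        // spin until another thread clears the flag; the compiler may not
        // hoist this load out of the loop as it could with a plain bool
    }
}

int main()
{
    std::thread worker{ foo };
    flag.store(false, std::memory_order_release);  // well-defined cross-thread write
    worker.join();
    return 0;
}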
For your example, as explained above, volatile is not enough, but you do not need std::atomic either, since you have a lock. If your lock is used correctly, no other thread can touch the value while you are inside the critical section, so std::atomic is redundant.
std::atomic is mainly useful when the entire critical section consists of atomic operations, so it can be used instead of taking a lock, which is usually slower (not always; it depends on the flow).
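A small sketch of that idea (the counter example is mine, not from the question): when the whole critical section is a single atomic operation, the lock can be dropped entirely:

#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<int> counter{ 0 };

int main()
{
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
    {
        workers.emplace_back([]() {
            for (int j = 0; j < 100000; ++j)
            {
                counter.fetch_add(1, std::memory_order_relaxed);  // the entire "critical section"
            }
        });
    }
    for (auto& t : workers)
    {
        t.join();
    }
    std::cout << counter.load() << std::endl;  // always 400000, no mutex needed
    return 0;
}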
Upvotes: 0
Reputation: 51864
The volatile qualifier has no required effect on access to an object from different threads – it only guarantees that no side-effects of modification in a single thread will be optimized out by the compiler. From cppreference (bold emphasis mine):
- volatile object - an object whose type is volatile-qualified, or a subobject of a volatile object, or a mutable subobject of a const-volatile object. Every access (read or write operation, member function call, etc.) made through a glvalue expression of volatile-qualified type is treated as a visible side-effect for the purposes of optimization (that is, within a single thread of execution, volatile accesses cannot be optimized out or reordered with another visible side effect that is sequenced-before or sequenced-after the volatile access. This makes volatile objects suitable for communication with a signal handler, but not with another thread of execution, see std::memory_order). Any attempt to refer to a volatile object through a glvalue of non-volatile type (e.g. through a reference or pointer to non-volatile type) results in undefined behavior.
To prevent undefined behaviour when accessing an object from multiple threads, you should use a std::atomic object. Again, from cppreference:
Each instantiation and full specialization of the std::atomic template defines an atomic type. If one thread writes to an atomic object while another thread reads from it, the behavior is well-defined (see memory model for details on data races).
In addition, accesses to atomic objects may establish inter-thread synchronization and order non-atomic memory accesses as specified by std::memory_order.
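To illustrate that last sentence, here is a minimal sketch (variable names are mine) where an atomic flag with release/acquire ordering publishes a non-atomic value to another thread:

#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                       // non-atomic data
std::atomic<bool> published{ false };  // atomic flag that orders the access

void producer()
{
    payload = 42;                                      // 1. write the data
    published.store(true, std::memory_order_release);  // 2. publish it
}

void consumer()
{
    while (!published.load(std::memory_order_acquire))
    {
        // spin until the producer's store becomes visible
    }
    assert(payload == 42);  // acquire pairs with release: the write to payload is visible
}

int main()
{
    std::thread t1{ producer };
    std::thread t2{ consumer };
    t1.join();
    t2.join();
    return 0;
}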
Upvotes: 5
Reputation: 6556
No. volatile is a confusing keyword, but in C++ it has nothing to do with concurrency, unlike in C# or Java where it does provide memory-ordering guarantees. Here it is just a hint to the compiler not to optimize away accesses to the variable.
Upvotes: 3