Reputation: 187
I have been reading the code of the cppcoro project recently, and I have a question.
https://github.com/lewissbaker/cppcoro/blob/master/lib/async_auto_reset_event.cpp#L218 https://github.com/lewissbaker/cppcoro/blob/master/lib/async_auto_reset_event.cpp#L284
if (waiter->m_refCount.fetch_sub(1, std::memory_order_release) == 1) // #218
{
waiter->m_awaiter.resume();
}
Since line 218 writes with memory_order_release, the memory_order_acquire operation on m_refCount in line 284 can read the value correctly. That is fine. But fetch_sub is an RMW operation. To read the modification in line 284 correctly, isn't a memory_order_acquire flag also needed? So I wonder: why doesn't m_refCount use memory_order_acq_rel in both line 218 and line 284?
return m_refCount.fetch_sub(1, std::memory_order_acquire) != 1; // #284
Thank you.
Upvotes: 1
Views: 587
Reputation: 26476
Because this is not how memory orders work.

We add a memory order to an atomic operation in order to achieve two things:

- prevent the compiler and the CPU from reordering surrounding reads and writes across the atomic operation, and
- make writes performed before the atomic operation in one thread visible to another thread that observes it.

I wrote an answer here that explains these two points more clearly.
When a coroutine is suspended in one thread and resumed in another thread, no additional synchronization is needed*. cppreference on coroutines says:
Note that because the coroutine is fully suspended before entering awaiter.await_suspend(), that function is free to transfer the coroutine handle across threads, with no additional synchronization.
As for reordering: the actual logic (waiter->m_awaiter.resume();) is guarded by one big fat if statement. The compiler cannot reorder the resumption before the fetch_sub anyway, because doing so would ignore the role of the if statement and break the logic of the code.
So, we don't need any memory order here other than relaxed. The fact that fetch_XXX is an RMW operation means nothing by itself - we use the right memory order for the right use-case.
If you like cppcoro, please try my own coroutine library, concurrencpp.
*A more correct statement is: no additional synchronization is needed besides the synchronization needed to pass a coroutine_handle from one thread to another.
Upvotes: 1
Reputation: 4703
The operations on the atomic variable itself will not cause data races either way.

std::memory_order_release

ensures that all writes made before the release operation in one thread become visible to another thread that performs an acquire operation on the same atomic. Memory orders act like fences, so that other (non-atomic) objects are properly published between threads.
Upvotes: 0