Reputation: 2226
I'm reading chapter 16 — Shared-State Concurrency of The Rust Programming Language. It says:
Unfortunately, Rc<T> is not safe to share across threads. When Rc<T> manages the reference count, it adds to the count for each call to clone and subtracts from the count when each clone is dropped. But it doesn't use any concurrency primitives to make sure that changes to the count can't be interrupted by another thread.
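For reference, here is a small single-threaded illustration of that counting behaviour (my own example, observing the count through Rc::strong_count):

    use std::rc::Rc;

    fn main() {
        let a = Rc::new(5);                    // count starts at 1
        println!("{}", Rc::strong_count(&a));  // 1
        let b = Rc::clone(&a);                 // clone increments the count
        println!("{}", Rc::strong_count(&a));  // 2
        drop(b);                               // dropping a clone decrements it
        println!("{}", Rc::strong_count(&a));  // 1
    }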
What does it mean by "to make sure that changes to the count can't be interrupted by another thread"?
The only scenario where I think the changes to the count are interrupted is if a thread is created and somehow panics/crashes, so the lock is never unlocked and the reference count never decreases. I can imagine Rust calling the destructors of every object in scope if a panic happens.
Can somebody clarify this for me?
Upvotes: 6
Views: 1463
Reputation: 300099
to make sure that changes to the count can’t be interrupted by another thread.
That's a very unfortunate phrasing, and is inaccurate. Interruptions are the least of our worries, really.
As far as concurrency is concerned, Rust's memory model is based on the memory model that C11 and C++11 adopted. If you want to learn more about memory models, I can only recommend reading Preshing's article on weak vs. strong memory models; I'll try to do the material justice in this answer.
Roughly speaking, a memory model is a model which specifies which operations can be reordered, and which cannot.
The reordering may occur:
- in the compiler, which may move reads and writes around while optimizing,
- in the CPU, which may execute and retire memory operations out of order.
In general, for performance reasons, reordering reads/writes is good: it allows more efficient utilization of the CPU and faster progress. However, the correctness of some algorithms depends on various threads observing events in a certain order... and thus sometimes some reads/writes should not be reordered. The memory model, and the memory orderings, are used to specify the exact constraints that the compiler and CPU must respect to execute the algorithm correctly.
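As an illustration of how Rust lets you state such constraints, here is a minimal sketch of my own using the standard atomic types (the names DATA and READY are just for this example): the Release/Acquire pair forbids exactly the reorderings that would let the consumer see READY set while DATA is still stale.

    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
    use std::thread;

    // One thread publishes a value, another consumes it.
    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        let producer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            // Release: the store to DATA may not be reordered after this store.
            READY.store(true, Ordering::Release);
        });

        let consumer = thread::spawn(|| {
            // Acquire: once READY is observed as true, the store to DATA is
            // guaranteed to be visible as well.
            while !READY.load(Ordering::Acquire) {
                std::hint::spin_loop();
            }
            assert_eq!(DATA.load(Ordering::Relaxed), 42);
        });

        producer.join().unwrap();
        consumer.join().unwrap();
    }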
How could this affect Rc<T>? By ignoring increments.
In a weak memory model, if two different cores increment the counter, one of the increments may be ignored.
Imagine the following timeline, on a given thread, where CN means that the current count of owners is N, and C0 implies destruction.
T1 -- Create: C1 --- Clone: C2 -- Drop Clone: C1 --- Drop: C0.
Now, imagine that this thread shares the Rc<T> with another thread:

T1 -- Create: C1 --- Clone: C2 ------------------------------ C1 --- Drop Clone: C0 --- Access **BOOM**
                          \                                  /
T2                         \_ Clone: C2 -- Drop Clone: C1 __/
                                 ^                   ^
                  Only one increment was seen,    but both decrements are.
Why do CPUs use weak memory models at all? Performance.
A strong memory model means a lot of unnecessary chatter between cores to synchronize cache lines -- chatter which adds latency to operations.
Weaker memory models allow less chatter, and thus less latency, meaning that programs can be executed faster or for less power.
Even on a hypothetical CPU where every read/write touches memory, it could still go wrong due to race conditions. Specifically, a plain increment is a separate read and write, and two threads may interleave them:
- thread A reads the count (say, 2),
- thread B reads the count (also 2),
- thread A writes 3,
- thread B writes 3,
and one of the two increments is lost.
If you look at the AtomicXXX types in Rust, you'll notice the presence of a number of RMW (Read-Modify-Write) operations such as fetch_add, which atomically read, increment, and write.
The atomicity is important, as otherwise race conditions may occur.
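To make that concrete, here is a small sketch of my own (not Rc's actual implementation) of a shared counter bumped from several threads with fetch_add; because the read-modify-write happens as one atomic operation, no increment can be lost:

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        // Shared counter, incremented from several threads.
        let count = Arc::new(AtomicUsize::new(0));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let count = Arc::clone(&count);
                thread::spawn(move || {
                    for _ in 0..1_000 {
                        // Atomic read-modify-write: no interleaving thread can
                        // observe or clobber a half-finished increment.
                        count.fetch_add(1, Ordering::Relaxed);
                    }
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }

        assert_eq!(count.load(Ordering::Relaxed), 4_000);
    }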
Is the CPU the only thing that could break Rc<T>? Even on a hypothetical CPU without any registers, where the increment/decrement would directly modify memory atomically, things could still go wrong.
In the absence of atomic memory orderings, the optimizer is allowed to assume that no other thread of execution is observing the writes to memory: unsynchronized concurrent access is a data race, which is Undefined Behavior, after all.
Therefore, an optimizer would be perfectly allowed to:
1. read the counter into a register,
2. add 1 to the register,
3. run the code that uses the Rc,
4. subtract 1 from the register,
5. write the register back to the counter.

If another thread drops the last other reference between (3) and (5), the counter in memory will reach 0, and that other thread will therefore drop the value inside, even though this thread is still using it.
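As a rough sketch of the kind of non-atomic bookkeeping involved (a toy type of my own, not the standard library's actual Rc internals, although Rc does, if I remember correctly, store its counts in plain Cell<usize> fields):

    use std::cell::Cell;

    // Toy, non-atomic reference counter, purely for illustration.
    struct ToyCount {
        strong: Cell<usize>,
    }

    impl ToyCount {
        fn increment(&self) {
            // Two separate steps: read, then write. Nothing here tells the
            // compiler or CPU that another thread might be watching `strong`,
            // so the value could sit in a register for a while, and two
            // concurrent increments could lose an update. Cell is !Sync,
            // which is exactly why Rust refuses to share this (and Rc)
            // across threads in the first place.
            let current = self.strong.get();
            self.strong.set(current + 1);
        }
    }

    fn main() {
        let c = ToyCount { strong: Cell::new(1) };
        c.increment();
        assert_eq!(c.strong.get(), 2);
    }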
So how are you supposed to keep all of this in mind? Don't worry, you don't have to!
The Rust compiler has your back. Unless you whip out unsafe, it will ensure that you do not accidentally introduce such race conditions.
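For example, here is a sketch of what that looks like in practice (the commented-out Rc version is rejected at compile time because Rc<i32> is not Send; switching to Arc, whose counts are atomic, compiles and runs fine):

    use std::sync::Arc;
    use std::thread;

    fn main() {
        // This does not compile: `std::rc::Rc<i32>` is not `Send`, so the
        // compiler refuses to move it into another thread.
        //
        // let rc = std::rc::Rc::new(5);
        // thread::spawn(move || println!("{rc}"));

        // Arc<T> does the same reference counting with atomic operations,
        // so sharing it across threads is allowed.
        let arc = Arc::new(5);
        let handle = thread::spawn({
            let arc = Arc::clone(&arc);
            move || println!("{arc}")
        });
        handle.join().unwrap();
        println!("{arc}");
    }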
As for understanding all that, there's a lot of literature out there. The exact effects of each ordering are documented, and for the bigger picture Preshing's blog is really good; I heartily recommend it.
Upvotes: 18