Reputation: 14320
When you have a weak_ptr, and you do:
std::weak_ptr<int> wp; // Pretend it's assigned to a shared_ptr
if (!wp.expired()) // Equivalent to use_count() != 0
{ /* Here we're not 100% sure the raw pointer is accessible, because the last shared_ptr, owned by another thread,
could have just been destroyed and be in the middle of decrementing the use_count from 1 to 0 and calling the deleter. */
}
On the other hand:
std::weak_ptr<int> wp; // Pretend it's assigned to a shared_ptr
if (wp.expired()) // Equivalent to use_count() == 0
{ /* Here we are sure that the raw pointer is invalid because of the interesting property that once the use_count is 0 it can't get incremented anymore */
}
The correct way to access a pointer if the resource is still valid is by calling lock, which returns a shared_ptr to the resource if it's valid, or an empty/null shared_ptr if the resource is not valid:
if (std::shared_ptr<int> shp = wp.lock())
{ // We know the resource is valid, we can use it
}
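For completeness, here is a small self-contained sketch of that idiom (the two-thread setup and the names are mine, purely illustrative). The point is that once lock() hands you a non-empty shared_ptr, that shared_ptr keeps the resource alive for as long as you hold it, even if every other owner disappears in the meantime:
#include <iostream>
#include <memory>
#include <thread>

int main()
{
    auto sp = std::make_shared<int>(42);
    std::weak_ptr<int> wp = sp;

    std::thread observer([wp] {
        // lock() atomically checks the count and, if the object is still alive,
        // returns a shared_ptr that keeps it alive while we use it.
        if (std::shared_ptr<int> shp = wp.lock())
            std::cout << "still alive: " << *shp << '\n';
        else
            std::cout << "already gone\n";
    });

    sp.reset(); // may run before or after lock() in the other thread
    observer.join();
}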
The thing I don't get is that the lock function is defined, according to the docs, as:
Effectively returns expired() ? shared_ptr<T>() : shared_ptr<T>(*this), executed atomically.
So if expired() (equivalent to use_count() == 0) returns true, then we know for sure that the pointer can't be dereferenced, and we get back an empty shared_ptr. If expired() returns false, we get back a shared_ptr to the resource, but the thing is that, as I mentioned above, a false result can be a false negative, and yet we still get a shared_ptr returned. By the time we check that returned shared_ptr, another thread could still be in the process of writing to the control block, right?
When the docs say "executed atomically", does that mean calling the lock() method takes a mutex every time you want to access the weak_ptr, or does it use an atomic operation? Either way, what is the performance cost of this?
I don't think it uses atomic types for the counters, because I've heard about a trick that's often implemented, which I think is a way of avoiding atomic types/operations:
Adding one for all shared_ptr instances is just an optimization (saves one atomic increment/decrement when copying/assigning shared_ptr instances) Link.
Upvotes: 4
Views: 2542
Reputation: 180155
The relevant C++ language concept is sequencing. Typically, one operation is sequenced before or sequenced after another operation. "Executed atomically" in this case means that nothing is sequenced after expired() but sequenced before shared_ptr<T>(*this).
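In other words, that wording rules out the naive two-step version, where another thread can drop the last shared_ptr between the check and the construction (a sketch, only to illustrate the race; note that the shared_ptr<T>(wp) constructor throws if wp has expired by then):
// NOT what lock() does -- the two steps written out separately:
if (!wp.expired())                  // use_count observed as nonzero here...
{
    std::shared_ptr<int> sp(wp);    // ...but the last owner may die in between,
                                    // so this can still throw std::bad_weak_ptr.
                                    // lock() does the check and the construction
                                    // as one atomic step and returns an empty
                                    // shared_ptr instead of throwing.
}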
In the end, this is really a compiler guarantee. Your implementation's library knows how to get the compiler to do the right thing. Locks in the standard library are similar; they too must make sure the compiler does the right thing. It's really up to the library writer to decide exactly how the two are implemented.
The performance is likely implementation-dependent, but this is a pretty common class. That means implementors will care about performance.
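As for the mechanism: as far as I know, mainstream implementations keep the use counts as atomic integers in the control block and implement lock() as an "increment the strong count only if it is not zero" compare-exchange loop, rather than taking a mutex. The following is only a rough sketch of that idea (names invented for illustration), not the actual code of any standard library:
#include <atomic>

std::atomic<long> strong_count{1}; // lives in the control block; 1 when the first shared_ptr is created

bool try_add_strong_ref()
{
    long n = strong_count.load(std::memory_order_relaxed);
    while (n != 0)                                    // 0 means expired: give up
    {
        if (strong_count.compare_exchange_weak(n, n + 1,
                                               std::memory_order_acq_rel,
                                               std::memory_order_relaxed))
            return true;                              // we now own a strong reference
        // on failure, compare_exchange_weak reloaded n; retry
    }
    return false;                                     // lock() then returns an empty shared_ptr
}
This is also why, as the question notes, a use_count of 0 is final: a loop like the one above never increments a counter it has observed to be 0, so once the last owner is gone, lock() can only ever return an empty shared_ptr.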
Upvotes: 1