97mik

Reputation: 15

Where and why deadlock?

I have 2 concurrent queues:

let concurrentQueue = DispatchQueue(label: "test.concurrent", attributes: .concurrent)
let syncQueue = DispatchQueue(label: "test.sync", attributes: .concurrent)

And code:

for index in 1...65 {
    concurrentQueue.async {
        self.syncQueue.async(flags: .barrier) {
            print("WRITE \(index)")
        }
        
        self.syncQueue.sync {
            print("READ \(index)")
        }
    }
}

Outputs:

WRITE 1
READ 1

Why, where, and how does it deadlock?

With fewer than 65 iterations, everything is fine.

Upvotes: 1

Views: 1109

Answers (2)

Rob

Reputation: 437592

This pattern (async writes with barrier, concurrent reads) is known as the “reader-writer” pattern. This particular multithreaded synchronization mechanism can deadlock in thread explosion scenarios.

In short, it deadlocks because:

  • You have “thread explosion”;

  • You have exhausted the worker thread pool, which only has 64 threads;

  • Your dispatched item makes two potentially blocking calls: not only the sync, which obviously can block, but also the async to the concurrent queue (see next point); and

  • When you hit a dispatch, if there is not an available worker thread in the pool, it will wait until one is made available (even if dispatching asynchronously).

The key observation is that one should simply avoid unbridled thread explosion. Generally we reach for tools such as GCD's concurrentPerform (a parallel for loop that is constrained to the number of CPU cores), operation queues (which can be controlled through a judicious maxConcurrentOperationCount setting), or Swift concurrency (whose cooperative thread pool limits the degree of concurrency, with actors for synchronization, etc.).
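For example, here is a minimal sketch of the concurrentPerform approach, keeping the question's reader-writer queue (written as a top-level script; in a class you would capture self as in the question):

import Dispatch

let syncQueue = DispatchQueue(label: "test.sync", attributes: .concurrent)

// concurrentPerform blocks the caller until all iterations finish, but it
// never runs more iterations concurrently than there are CPU cores, so the
// worker thread pool cannot be exhausted.
DispatchQueue.concurrentPerform(iterations: 65) { index in
    syncQueue.async(flags: .barrier) {
        print("WRITE \(index)")
    }

    syncQueue.sync {
        print("READ \(index)")
    }
}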


While the reader-writer pattern has intuitive appeal, in practice it simply introduces complexity (synchronizing a multithreaded environment with yet another multithreaded mechanism, both constrained by the surprisingly small GCD worker thread pool), without many practical benefits. Benchmark it and you will see that it is negligibly faster than a simple serial GCD queue, and considerably slower than lock-based approaches.
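By way of comparison, here is a minimal sketch of a lock-based alternative using NSLock (the Synchronized type and its method names are illustrative, not a standard API):

import Foundation

// A small lock-based guard around a value; the critical sections are tiny,
// so no GCD worker threads are tied up waiting.
final class Synchronized<Value> {
    private let lock = NSLock()
    private var value: Value

    init(_ value: Value) {
        self.value = value
    }

    func read<T>(_ body: (Value) throws -> T) rethrows -> T {
        lock.lock()
        defer { lock.unlock() }
        return try body(value)
    }

    func write(_ body: (inout Value) throws -> Void) rethrows {
        lock.lock()
        defer { lock.unlock() }
        try body(&value)
    }
}

// Usage:
let counter = Synchronized(0)
counter.write { $0 += 1 }
let current = counter.read { $0 }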

Upvotes: 2

ipmcc

Reputation: 29906

The < 65 observation immediately makes me think you're hitting the queue width limit, which is undocumented (for good reason), but widely understood to be 64. A while back, I wrote a very detailed answer about this queue width limit over here. You should check it out.

I do have some relevant ideas I can share:

The first thing I would recommend is replacing the print() calls with something that does not trigger I/O. You could create a numReads variable and a numWrites variable, use something like an atomic compare-and-swap operation to increment them, then read them after the loop completes and make sure they're what you expected. See Swift Atomics over here. You could also just dispatch the print operations (async) to the main queue, which would likewise eliminate any I/O problems.
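For example, here is a minimal sketch using the Swift Atomics package (assuming swift-atomics has been added as a dependency; the counter names are illustrative). It keeps the question's structure but removes the I/O:

import Atomics
import Dispatch

let concurrentQueue = DispatchQueue(label: "test.concurrent", attributes: .concurrent)
let syncQueue = DispatchQueue(label: "test.sync", attributes: .concurrent)

let numWrites = ManagedAtomic<Int>(0)
let numReads = ManagedAtomic<Int>(0)
let group = DispatchGroup()

for _ in 1...65 {
    concurrentQueue.async(group: group) {
        syncQueue.async(group: group, flags: .barrier) {
            numWrites.wrappingIncrement(ordering: .relaxed)   // count a write, no I/O
        }

        syncQueue.sync {
            numReads.wrappingIncrement(ordering: .relaxed)    // count a read, no I/O
        }
    }
}

// If the deadlock is unrelated to I/O, this wait will still hang at 65
// iterations; if it completes, both counters should read 65.
group.wait()
print("writes:", numWrites.load(ordering: .relaxed), "reads:", numReads.load(ordering: .relaxed))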

I'll also note here that by introducing a barrier block on every single iteration of the outer for loop, you're effectively making that queue serial, but with a non-deterministic order. At the very least, you're creating a ton of contention for it, which is suboptimal. Reader/Writer locks tend to make the most sense when there are many more reads than writes, but your code here has a 1:1 ratio of reads to writes. If that is what your real use case looks like, you should just use a serial queue (or a mutex) since it would achieve the same effect, but without the contention.
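To illustrate, here is a minimal sketch of the serial-queue version of the question's loop (the queue label is arbitrary, and the outer concurrent dispatch is dropped, since the serial queue already runs exactly one block at a time):

import Dispatch

// A single serial queue: writes and reads are mutually exclusive by
// construction, with no barriers and no worker-thread pile-up.
let serialQueue = DispatchQueue(label: "test.serial")

for index in 1...65 {
    serialQueue.async {
        print("WRITE \(index)")
    }

    serialQueue.sync {
        print("READ \(index)")
    }
}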

At the risk of being "that guy", can you maybe elaborate on what you're trying to accomplish here? If you're just trying to prove to yourself that reader/writer locks emulated using barrier blocks work, I can assure you that they do.

Upvotes: 0
