Reputation: 5542
I was reading O'Reilly's iOS 6 Programming Cookbook and am confused about something. Quoting from page 378, chapter 6, "Concurrency":
For any task that doesn’t involve the UI, you can use global concurrent queues in GCD. These allow either synchronous or asynchronous execution. But synchronous execution does not mean your program waits for the code to finish before continuing. It simply means that the concurrent queue will wait until your task has finished before it continues to the next block of code on the queue. When you put a block object on a concurrent queue, your own program always continues right away without waiting for the queue to execute the code. This is because concurrent queues, as their name implies, run their code on threads other than the main thread.
I bolded the text that intrigues me. I think it is false, because as I've just learned today, synchronous execution means precisely that the program waits for the code to finish before continuing.
Is this correct or how does it really work?
Upvotes: 2
Views: 1830
Reputation: 5542
The author is Vandad Nahavandipoor, I don't want to affect this guy's sales income, but all his books contain the same mistakes in the concurrency chapters:
http://www.amazon.com/Vandad-Nahavandipoor/e/B004JNSV7I/ref=sr_tc_2_rm?qid=1381231858&sr=8-2-ent
Which is ironic since he's got a 50-page book exactly on this subject.
People should STOP reading this guy's books.
Upvotes: 1
Reputation: 29886
How is this paragraph wrong? Let us count the ways:
For any task that doesn’t involve the UI, you can use global concurrent queues in GCD.
This is overly specific and inaccurate. Certain UI-centric tasks, such as loading images, can be done off the main thread. This would be better said as "In most cases, don't interact with UIKit classes except from the main thread," but there are exceptions (for instance, drawing to a UIGraphicsContext is thread-safe as of iOS 4, IIRC, and drawing is a great example of a CPU-intensive task that could be offloaded to a background thread). FWIW, any work unit you can submit to a global concurrent queue you can also submit to a private concurrent queue.
These allow either synchronous or asynchronous execution. But synchronous execution does not mean your program waits for the code to finish before continuing. It simply means that the concurrent queue will wait until your task has finished before it continues to the next block of code on the queue.
As iWasRobbed speculated, they appear to have conflated sync/async work submission with serial/concurrent queues. Synchronous execution does, by definition, mean that your program waits for the code to return before continuing. Asynchronous execution, by definition, means that your program does not wait. Similarly, serial queues only execute one submitted work unit at a time, executing each in FIFO order. Concurrent queues, private or global, in the general case (more in a second), schedule submitted blocks for execution, in the order in which they were enqueued, on one or more background threads. The number of background threads used is an opaque implementation detail.
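The sync/async distinction described here is easy to observe directly. A minimal Swift sketch using the Dispatch overlay (`.sync`/`.async` correspond to `dispatch_sync`/`dispatch_async`; the queue label and the `record` helper are made up for the example):

```swift
import Dispatch
import Foundation

// A private concurrent queue (the label is arbitrary).
let queue = DispatchQueue(label: "com.example.worker", attributes: .concurrent)

var log: [String] = []
let logLock = NSLock()
func record(_ s: String) { logLock.lock(); log.append(s); logLock.unlock() }

// Synchronous submission: the caller blocks until the block returns.
queue.sync { record("sync block done") }
record("after sync")          // always runs AFTER the sync block

// Asynchronous submission: the caller continues immediately.
let sema = DispatchSemaphore(value: 0)
queue.async {
    record("async block done")
    sema.signal()
}
record("after async submit")  // does not wait for the async block
sema.wait()                   // wait here only so we can inspect the log

print(log)
```

The first two entries are always in that order; the last two can interleave either way, which is exactly the dispatcher-doesn't-wait behavior of async submission.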
When you put a block object on a concurrent queue, your own program always continues right away without waiting for the queue to execute the code.
Nope. Not true. Again, they're mixing up sync/async and serial/concurrent. I suspect what they're trying to say is: When you enqueue a block asynchronously, your own program always continues right away without waiting for the queue to execute the code.
This is because concurrent queues, as their name implies, run their code on threads other than the main thread.
This is also not correct. For instance, if you have a private concurrent queue that you are using to act as a reader/writer lock protecting some mutable state, and you dispatch_sync to that queue from the main thread, your code will, in many cases, execute on the main thread.
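That inline execution is easy to observe. A sketch (the queue label is made up; note that running the block on the caller's thread is documented by Apple as a performance optimization of dispatch_sync, so it's something to be aware of, not something to build correctness on):

```swift
import Dispatch
import Foundation

// A private concurrent queue acting as a reader/writer lock (hypothetical label).
let stateQueue = DispatchQueue(label: "com.example.state", attributes: .concurrent)

let caller = Thread.current
var ranOnCallersThread = false

// dispatch_sync from this thread: libdispatch will, in many cases, simply run
// the block right here, on the calling thread, instead of a worker thread.
stateQueue.sync {
    ranOnCallersThread = (Thread.current === caller)
}

print("ran on caller's thread:", ranOnCallersThread)
```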
Overall, this whole paragraph is really pretty horrible and misleading.
EDIT: I mentioned this in a comment on another answer, but it might be helpful to put it here for clarity. The concept of "Synchronous vs. Asynchronous dispatch" and the concept of "Serial vs. Concurrent queues" are largely orthogonal. You can dispatch work to any queue (serial or concurrent) in either a synchronous or asynchronous way. The sync/async dichotomy is primarily relevant to the "dispatch*er*" (in that it determines whether the dispatcher is blocked until completion of the block or not), whereas the Serial/Concurrent dichotomy is primarily relevant to the dispatch*ee* block (in that it determines whether the dispatchee is potentially executing concurrently with other blocks or not).
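The four combinations that the EDIT describes as orthogonal can be written out directly (a sketch; the queue labels are made up). Both queue flavors accept both submission styles:

```swift
import Dispatch
import Foundation

let serial = DispatchQueue(label: "com.example.serial")
let concurrent = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)

var results: [String] = []
let lock = NSLock()
func record(_ s: String) { lock.lock(); results.append(s); lock.unlock() }

// Sync dispatch blocks the dispatcher until the block returns,
// whether the target queue is serial or concurrent.
serial.sync     { record("sync on serial") }
concurrent.sync { record("sync on concurrent") }

// Async dispatch returns to the dispatcher immediately,
// again regardless of queue flavor.
let group = DispatchGroup()
serial.async(group: group)     { record("async on serial") }
concurrent.async(group: group) { record("async on concurrent") }
group.wait()  // only so the example can inspect the results

print(results.sorted())
```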
Upvotes: 4
Reputation: 46703
I think that bit of text is poorly written, but they are basically explaining the difference between execution on a serial queue and execution on a concurrent queue. A serial queue executes only one task at a time (though not necessarily always on the same thread), whereas a concurrent queue can use multiple threads to run several tasks at once.
Serial queues execute one task after the next, in the order in which the tasks were put into the queue. Each task has to wait for the prior task to finish before it can be executed.
In a concurrent queue, tasks can run at the same time as other tasks, since the queue normally uses multiple threads; they are still started in the order they were enqueued, but they can finish in any order. If you use NSOperation, you can also set up dependencies on a concurrent queue to ensure that certain tasks are executed before others.
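The dependency mechanism lives on NSOperation itself (`Operation`/`OperationQueue` in Swift). A sketch, with made-up operation names:

```swift
import Foundation

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 4  // behaves as a concurrent queue

var order: [String] = []
let lock = NSLock()
func record(_ s: String) { lock.lock(); order.append(s); lock.unlock() }

let fetch = BlockOperation { record("fetch") }
let parse = BlockOperation { record("parse") }
parse.addDependency(fetch)  // parse will not start until fetch has finished

// Even though parse is enqueued first, the dependency enforces the ordering.
queue.addOperations([parse, fetch], waitUntilFinished: true)
print(order)
```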
Upvotes: 3
Reputation: 63667
When you put a block object on a concurrent queue, your own program always continues right away without waiting for the queue to execute the code. This is because concurrent queues, as their name implies, run their code on threads other than the main thread.
I find it confusing, and the only explanation I can think of is that she is talking about who blocks whom. From man dispatch_sync:
Conceptually, dispatch_sync() is a convenient wrapper around dispatch_async() with the addition of a semaphore to wait for completion of the block, and a wrapper around the block to signal its completion.
So execution returns to your code right away, but the next thing dispatch_sync does after queueing the block is wait on a semaphore until the block is executed. Your code blocks because it chooses to.
The other way your code would block is when the queue chooses to run the block on your thread (the one from which you called dispatch_sync). In that case, your code wouldn't regain control until the block has executed, so the check on the semaphore would always find the block already done.
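The man page's description can be sketched almost literally. A toy reimplementation (not how libdispatch is actually structured internally, and it omits the run-on-the-caller's-thread case the answer mentions):

```swift
import Dispatch

// Toy dispatch_sync built from dispatch_async plus a semaphore,
// following the man page's description.
func toySync(_ queue: DispatchQueue, _ block: @escaping () -> Void) {
    let done = DispatchSemaphore(value: 0)
    queue.async {
        block()
        done.signal()  // wrapper around the block to signal its completion
    }
    done.wait()        // the caller blocks because it chooses to
}

let queue = DispatchQueue(label: "com.example.toy", attributes: .concurrent)
var value = 0
toySync(queue) { value = 42 }
print(value)  // the block has finished by the time toySync returns
```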
Erica Sadun for sure knows better than me, so maybe I'm missing some nuance here, but this is my understanding.
Upvotes: 0