xcrypt

Reputation: 3366

How can multithreading speed up an application (when threads can't run concurrently)?

I'm learning about multithreading, but after reading some tutorials I'm sort of confused. I don't understand how multithreading can speed up an application.

By intuition, I would say multithreading slows down an application, because you constantly have to wait for those semaphores.

How and when can multithreading speed up an application, when threads can't run concurrently?

Upvotes: 13

Views: 13248

Answers (9)

Carl

Reputation: 44438

Think of threads as "things happening at the same time".

Once you think of them that way, then it doesn't matter if multiple threads are running on a single or multi-core machine. The idea is that you have more than one code path that executes simultaneously.

Now, if we look at a single-core machine, there can only ever be one thread executing at a time, as you point out. However, if you think of each thread as a context, there can be several happening at once: handling input, updating the display, handling network communication, doing background tasks, etc. So yes, on a single-core machine, multithreading will be slower. However, that's not the point. The point is that your application is able to handle multiple activities simultaneously.

Finally, when you do move from a single-core machine to one with multiple cores, if you've threaded your application properly, those contexts have a much better chance of truly running simultaneously.

Upvotes: 0

Endophage

Reputation: 21463

One of the most important uses of multithreading is in GUI programming. If you only had a single thread, what would happen when you click a button? You would have to wait for whatever action that button fired to complete before control returned to the GUI. To put that in context: if your browser ran in a single thread and you wanted to download, say, a Linux ISO, your entire browser would be unusable for the duration of the download, since the single thread would be taken up with the download and wouldn't be available to respond to user actions. You couldn't even cancel the download.

By using multiple threads, you can continue using your browser while the download occurs in the background.
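This pattern can be sketched with Python's threading module. The "download" here is simulated with time.sleep (which, like real network I/O, releases the GIL), and the file name is hypothetical; the main thread stays free to do other work while the background thread runs:

```python
import threading
import time

results = {}

def download(name):
    # Hypothetical download: time.sleep stands in for network I/O,
    # during which the main thread keeps running.
    time.sleep(0.2)
    results[name] = "complete"

worker = threading.Thread(target=download, args=("linux.iso",), daemon=True)
worker.start()

# Meanwhile the main thread stays responsive to "user actions"
ticks = 0
while worker.is_alive():
    ticks += 1          # e.g. process a UI event here
    time.sleep(0.01)
worker.join()
```

In a real GUI toolkit the main loop would be pumping events instead of counting ticks, but the structure is the same: long-running work moves off the thread that owns the UI.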

There are plenty of other uses that can speed up a program. For example, searching a large dataset. You can divide it up into chunks and each thread can search a chunk. You can then join on those threads completing and collect the results.
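The divide-into-chunks-and-join pattern looks roughly like this in Python (note that in CPython the GIL prevents CPU-bound threads from running truly in parallel, so this sketch only illustrates the structure; for a real speedup on CPU-bound search you would use multiprocessing or a GIL-free runtime):

```python
import threading

data = list(range(1_000_000))
target = 765_432
n_chunks = 4
chunk_size = len(data) // n_chunks
found = []      # list.append is atomic in CPython, so no lock is needed here
threads = []

def search(chunk):
    # Each thread searches its own slice of the dataset
    if target in chunk:
        found.append(target)

for i in range(n_chunks):
    chunk = data[i * chunk_size:(i + 1) * chunk_size]
    t = threading.Thread(target=search, args=(chunk,))
    threads.append(t)
    t.start()

# Join on those threads completing, then collect the results
for t in threads:
    t.join()
```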

Also, semaphores aren't always necessary. It depends on what you're doing. If you have multiple threads consuming tasks from a single work queue, you want to make sure a job is removed from the queue before another thread can request one, so that you're not assigning the same work to two threads. In that case you use semaphores to make your work queue "thread safe". On the other hand, Hootsuite or one of those other social media desktop clients could (I don't know if they do) run a thread for each platform you're connected to, so that you can fetch updates from multiple platforms in parallel.
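In Python you rarely write the semaphore yourself, because queue.Queue does the locking internally: removal is atomic, so no two workers can ever receive the same job. A minimal sketch of the shared work queue:

```python
import queue
import threading

jobs = queue.Queue()        # internally synchronized: removal is atomic
for n in range(10):
    jobs.put(n)

processed = []
processed_lock = threading.Lock()

def worker():
    while True:
        try:
            job = jobs.get_nowait()   # no two threads can take the same job
        except queue.Empty:
            return
        with processed_lock:          # protect the shared results list
            processed.append(job)

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```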

Upvotes: 2

Daniel Mošmondor

Reputation: 19956

Not everything happens on the CPU. Imagine a computer that doesn't have threads. That computer will waste extremely large amounts of time:

  • waiting for the keyboard to respond
  • waiting for the mouse to respond
  • waiting for the hard drive to complete a request
  • waiting for a network packet to arrive from some destination

and so on. In fact, such a computer would hardly get any use out of the CPU at all, if the system is designed to be even minimally interactive.

The same thing, to a lesser extent, applies to ONE process, i.e. your application.

EDIT:

Before 'nice' kernels running on 'nice' processors such as the 286 and onward, OSes (or primitive OSes) simulated multithreading by handling interrupts. Even the ZX Spectrum had interrupts to handle the keyboard, for example (if I remember correctly).

Upvotes: 1

Davita

Reputation: 9114

In some cases, multithreading slows down the application, because locking and context switching consume some CPU resources. But overall application performance can greatly improve when you target a multi-core or multi-CPU machine, because the only way to distribute your code across cores/CPUs is to use threads.

On single-core machines, running a single task with multiple threads will surely cause a slowdown, for the reasons mentioned above.

Another use of threads is to keep the UI responsive. Imagine a scenario where you need to perform heavy I/O operations, such as reading from a device or fetching data from the network. If you perform those operations on the main thread, your UI will be blocked while the I/O operation is running. You can avoid blocking the UI by running I/O operations on a different thread. That is probably what was meant by "speeding up the application".

Upvotes: 1

Charles Brunet

Reputation: 23110

On a computer, many programs (or threads) share some resources. Suppose one thread is waiting for a specific resource (for example, it wants to write data to disk). The OS can then switch to another thread so it can continue computing with the available resources. This is why it is often a good idea to put I/O operations on a separate thread, and also to put the GUI on a separate thread.

Of course, multithreading will not give you a perfect speedup, but it can help improve performance a little. It works even better on hyper-threading architectures, where some registers are duplicated to minimize the impact of context switching.

Upvotes: 0

Corbin

Reputation: 33437

The idea behind multithreading is to have as few blocking points as possible. In other words, if a thread has to constantly wait on another thread to finish something, then the benefit of threads is likely lost in that situation.

Obligatory link: http://en.wikipedia.org/wiki/Amdahl's_law
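Amdahl's law puts a hard ceiling on the speedup: if a fraction p of the program can run in parallel, the best you can do with n workers is 1 / ((1 − p) + p/n). A quick numeric illustration (the function name is mine, not from the linked article):

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup when a fraction p of the work
    is parallelizable across n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# A program that is 50% parallel gains at most 1.33x from 2 workers,
# and a 90%-parallel program caps out at 10x no matter how many
# workers you throw at it.
```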

Also, as Mark Ransom said, if your hardware can't actually do more than one thing at once, then threads are really just logically running at the same time (by swapping) rather than actually running at the same time. That can still be useful in situations with I/O blocking, though.

Upvotes: 3

Fred Foo

Reputation: 363487

because you constantly have to wait for those semaphores.

Only in a poorly-designed program or in one designed for parallel work on a single-processor machine. In a well-designed program, the threads do useful work in parallel in between the synchronization points, and enough of it to outweigh the overhead of synchronization.

Even without parallel (multicore/multiprocessor) processing, multithreading can be beneficial when the threads do blocking I/O. E.g., the good old CVSup programs used multithreading in the single-core era to make full use of network connections' duplex capabilities. While one thread was waiting for data to arrive over the link, another would be pushing data the other way. Due to network latency, both threads necessarily had to spend a lot of time waiting, during which the other could do useful work.
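The duplex idea can be sketched with a socket pair and two threads, one per direction (this is my illustration, not CVSup's actual design): each thread sends its payload while the peer's data arrives the other way, so neither direction of the link sits idle.

```python
import socket
import threading

# socket.socketpair() gives two connected endpoints, standing in
# for the two ends of a duplex network link
a, b = socket.socketpair()
received = {}

def pump(name, sock, payload):
    sock.sendall(payload)                      # push data one way...
    received[name] = sock.recv(len(payload))   # ...while data arrives the other way

t1 = threading.Thread(target=pump, args=("a", a, b"ping-to-peer"))
t2 = threading.Thread(target=pump, args=("b", b, b"pong-to-peer"))
t1.start(); t2.start()
t1.join(); t2.join()
a.close(); b.close()
```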

Upvotes: 4

Joachim Isaksson

Reputation: 180867

Two ways I can think of, the first of which is probably what you mean by "parallel threading".

  • If you have multiple CPUs or cores, they can work simultaneously if you're running multiple threads.
  • In the single core case, if your thread ends up waiting for (synchronous) I/O, let's say you call read() to read 100 MB from tape, another thread can get scheduled and get work done while you wait.
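The second case above can be sketched in Python, with time.sleep standing in for the blocking read() (like real blocking I/O, it releases the GIL, so the "waiting" thread doesn't stop the other one from computing):

```python
import threading
import time

result = {}

def slow_read():
    # Stand-in for a blocking, synchronous read() from a slow device
    time.sleep(0.2)
    result["payload"] = "100 MB of tape data"

io_thread = threading.Thread(target=slow_read)
io_thread.start()

# Useful work gets scheduled and done while the "read" blocks
work = sum(i * i for i in range(100_000))

io_thread.join()
```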

Upvotes: 11

Tommy

Reputation: 100602

Removing 'parallel threading' from the concept of multithreading does make it pointless — if you don't allow the threads to execute at the same time then all you've got is one stream of processing that spends a lot of time hopping about in the OS's scheduler.

That the threads can operate in parallel is the whole performance gain. You should optimise your code so that semaphores are rarely used, if ever — you're right that they're expensive. A common approach is thread pooling and event loops: suppose you had 2,000 objects that you wanted to mutate; you'd push 2,000 associated tasks to the thread pool. The thread pool would ensure that the individual actions are performed on whatever threads become available, as they become available. If it's then able to post an event into a defined event loop when the work is done, there are no explicit semaphores in your code at all.
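The thread-pool approach maps directly onto Python's concurrent.futures (a scaled-down sketch of the 2,000-object scenario; the objects and the mutate function are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# 2,000 hypothetical objects to mutate; the pool hands tasks to
# threads as they become available
objects = [{"value": n} for n in range(2000)]

def mutate(obj):
    # Each task touches a distinct object, so no semaphore is needed
    obj["value"] += 1
    return obj["value"]

with ThreadPoolExecutor(max_workers=8) as pool:
    # pool.map preserves input order, playing the role of the
    # "work is done" event collection
    results = list(pool.map(mutate, objects))
```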

Upvotes: 0
