Reputation: 95
If I split a factorial program into 2 threads, one calculating the factorials of the even numbers up to 100 and the other calculating them for the odd numbers, what performance improvement can I expect in execution time on a single-core processor?
Upvotes: 2
Views: 3179
Reputation: 490178
Multi-threading will typically improve speed when/if it allows concurrent use of resources, some of which would otherwise be sitting idle at least part of the time.
For example, if you have a process that reads some data from disk, does a lot of processing on it, then writes results back to disk, you may well be able to gain something from doing disk I/O and computation (more or less) concurrently--even on a single-core system.
Even in cases like these, each part of the program individually typically suffers at least a little slowdown. Let's assume, however, that you're doing enough processing that it takes about the same length of time as the total of reading and writing.
Disk I/O uses one set of resources almost exclusively, and computation uses a different set of resources, also almost exclusively. Using two threads might add, say, 1% overhead of its own, but by doing the disk I/O and computation concurrently you get roughly double the speed, minus that 1% overhead, so your program takes about 51% as long to run multi-threaded as it would single-threaded.
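A minimal sketch of that kind of overlap, assuming a hypothetical process_chunk() standing in for the heavy computation and a line-by-line read from a file named input.txt (both are illustrative, not part of the answer): one thread blocks on the disk while the other keeps the CPU busy.

```cpp
// Illustrative only: one thread does disk I/O, the other does the CPU work,
// so the two kinds of resources can be in use at the same time.
#include <condition_variable>
#include <fstream>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::queue<std::string> chunks;   // work handed from the I/O thread to the compute thread
std::mutex m;
std::condition_variable cv;
bool done = false;

// Hypothetical CPU-heavy step; stands in for "a lot of processing".
long process_chunk(const std::string& s) {
    long sum = 0;
    for (char c : s) sum += c;
    return sum;
}

void reader(const std::string& path) {           // I/O-bound thread
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::lock_guard<std::mutex> lock(m);
        chunks.push(line);
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
}

void worker() {                                  // CPU-bound thread
    long total = 0;
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !chunks.empty() || done; });
        if (chunks.empty() && done) break;
        std::string chunk = std::move(chunks.front());
        chunks.pop();
        lock.unlock();                           // do the computation outside the lock
        total += process_chunk(chunk);
    }
    std::cout << "checksum: " << total << '\n';
}

int main() {
    std::thread io(reader, "input.txt");         // overlaps with the compute thread below
    std::thread cpu(worker);
    io.join();
    cpu.join();
}
```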
In your case, however, you're talking about two threads that do exactly the same things, so they compete for use of the exact same resources. This means you've done nothing to speed up the computation itself, and you've added at least some overhead to switch between execution of the two threads. Ergo, the multi-threaded code will almost certainly run slower than the single-threaded code (usually not drastically slower, but a little slower anyway).
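For concreteness, here is roughly the split the question describes (a sketch only: double is used so 100! doesn't overflow, and each thread writes only its own elements of a shared array, so no locking is needed). On one core the two threads just take turns on the same execution resources, so this should not be expected to beat a plain single-threaded loop.

```cpp
// Sketch of the even/odd split from the question. Both threads are pure
// CPU work, so on a single core they only interleave; nothing runs "for free".
#include <array>
#include <iostream>
#include <thread>

std::array<double, 101> fact{};      // fact[n] = n!, stored as double to avoid overflow

// Each thread fills only its own indices, so no synchronization is needed.
void compute(int start) {            // start = 1 for the odd numbers, 2 for the even ones
    for (int n = start; n <= 100; n += 2) {
        double f = 1.0;
        for (int k = 2; k <= n; ++k) f *= k;
        fact[n] = f;
    }
}

int main() {
    std::thread odd(compute, 1);
    std::thread even(compute, 2);
    odd.join();
    even.join();
    std::cout << "100! ~= " << fact[100] << '\n';   // about 9.33e157
}
```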
Upvotes: 5
Reputation: 27125
You cannot expect any performance improvement. In fact, you should expect the program to run significantly slower than the single-threaded version.
By itself, multi-threading adds overhead, so it generally makes programs run slower, not faster. We are willing to pay that penalty for two reasons:
1) We can use multi-threading as a way to use the processors of a multi-core machine in parallel, and thereby speed up a big computation.
I know, I said threads make things slower. Well, that's true. If you implement a parallel computation using two threads on two CPU cores, it will run less than twice as fast as the single-threaded version, but hopefully it will still be faster than what one thread running on one core can do. If not, then maybe four threads on four cores or eight threads on eight cores will be faster than the single-threaded version.
2) We can use multi-threading to simplify the design of real-time and soft real-time programs. Real-time and soft real-time programs often must respond to several (or many) different unsynchronized inputs. Having a different thread wait on each input can make the program easier to read than an event-driven program that polls each input in a big loop.
It's easier to read because the state variables associated with each independent input can be kept in local variables, which is a familiar idiom for most programmers. Of course, easier to read does not mean easier to write... without subtle bugs, but that's a topic for a different day.
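A minimal sketch of that thread-per-input idiom, with a console reader and a simulated once-per-second device standing in for two unsynchronized inputs (both sources, and the names used, are illustrative): each thread blocks on its own source and keeps its state in ordinary local variables, instead of one big loop polling everything.

```cpp
// Sketch of "one thread per input": each thread blocks on its own source.
// Output from the two threads may interleave; that's fine for a sketch.
#include <atomic>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

std::atomic<bool> running{true};

void console_input() {                       // input #1: lines typed by the user
    std::string line;
    int commands_seen = 0;                   // per-input state lives in locals
    while (running && std::getline(std::cin, line)) {
        ++commands_seen;
        std::cout << "command " << commands_seen << ": " << line << '\n';
        if (line == "quit") running = false;
    }
}

void periodic_sensor() {                     // input #2: a simulated periodic device
    int samples = 0;
    while (running) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::cout << "sample " << ++samples << '\n';
    }
}

int main() {
    std::thread t1(console_input);
    std::thread t2(periodic_sensor);
    t1.join();                               // returns after "quit" or end of input
    running = false;
    t2.join();
}
```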
Upvotes: 11