nobody

Reputation: 835

Is the point of multi-threading to increase your CPU usage?

I am very new to the concept of multi-threading, but to my understanding the whole point of a multi-threading architecture (I learnt there is hardware multi-threading and software multi-threading; not that I completely understand each concept, but I think I am asking about the hardware aspect here) is to "keep your CPU busy". For example, the CPU can process another task while your current one is fetching data from the hard disk.

If I am correct, then for a non-multi-threading CPU that is already near 100% usage, switching to a multi-threading one would not help much. Am I correct?

I am sure my statement of the problem is full of inaccuracies, but I hope I made myself understood.

Upvotes: 2

Views: 3661

Answers (2)

Margaret Bloom

Reputation: 44068

You have been given the task of building a house. Your team is composed of you, the supervisor, and a pool of workers.

When your boss comes by to check on progress, what would you like him to see? One worker doing all the work and the others watching, or all the workers busy?

You want to keep the workers busy by giving them independent tasks; the more workers, the harder this is. Furthermore, there are some issues to take into account: worker A was given the task of building a wall, and is building it. Before the wall gets too tall, the concrete needs to dry, so A spends a lot of time waiting.
During this wait they could help somewhere else.
But asking A to help somewhere else while they are in the middle of laying a row is pointless: they either have to decline or stop what they are doing.
One way or the other, you won't get any benefit.


The workers are the equivalent of threads.
The house building is the equivalent of a multi-threaded process.
Worker A building the wall is the equivalent of a CPU-bound task, one that uses the CPU as much as it can.
The concrete drying is the equivalent of an IO operation: it completes on its own, without the need for any worker.
Worker A waiting for the concrete to dry is the equivalent of an IO-bound task: it mostly waits.
Worker A being always busy is the equivalent of an optimal scheduling/multi-threading algorithm.


Now you are given the task of studying a CS book. Your team is composed of you, a student who cannot read, and a pool of readers.

How would you assign the readers? You can't have each of them read a different chapter to you, as you can't listen to more than one person at a time.

So even though you have a lot of readers, you pick one and have them read the book sequentially.

Reading a book is an example of an inherently sequential problem that won't benefit from multi-threading, while building a house is an example of a very parallelizable problem.


Handling multiple workers is not easy: worker A may start a wall because they took a look at the concrete stash and saw there was enough of it, but didn't claim it. Worker B needs some concrete and takes it from the stash; now A no longer has enough concrete.
This is the equivalent of a race condition: A checks and uses a resource at two different times (as two separate, divisible operations), and the result depends on the timing of the workers.
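The concrete-stash race is a classic check-then-act bug, and it can be sketched in a few lines of Python (the stash, worker names, and sleep are invented for the demo). The fix is a lock that makes the check and the take one indivisible step:

```python
import threading
import time

stash = {"concrete": 1}   # only one bag of concrete in the stash
lock = threading.Lock()
outcomes = []

def take_concrete(worker):
    with lock:                        # check and take as ONE indivisible step
        if stash["concrete"] > 0:     # check the stash...
            time.sleep(0.01)          # (without the lock, the other worker
                                      #  could run right here)
            stash["concrete"] -= 1    # ...then act on it
            outcomes.append((worker, "got concrete"))
        else:
            outcomes.append((worker, "stash empty"))

threads = [threading.Thread(target=take_concrete, args=(w,)) for w in "AB"]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(stash["concrete"], outcomes)
```

With the lock, exactly one worker gets the bag and the other finds the stash empty. Remove the `with lock:` line and both workers can pass the check before either takes, driving the stash to -1: the outcome then depends on timing, which is exactly what "race condition" means.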


If you think of the CPU as "units that can do things", you will realize that having more "units that can do things" helps only if they are not standing there gazing into the void.
Hence all the literature about multi-threading.

Upvotes: 6

Alexey Guseynov

Reputation: 5304

I originally misunderstood the question. Your statement is correct for software multi-threading, where we parallelize some algorithm to do the computation in multiple threads. In that case, if your CPU is already loaded with work, you can't make it work faster by executing code in multiple threads. Moreover, you can even expect a decrease in performance due to the overhead of inter-thread communication and context switches. But in the modern world it is not easy to find a single-core CPU (except in embedded applications), so in most cases you need multiple threads to fully utilize the CPU's computational ability.
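A common starting point for "fully utilizing" a multi-core CPU is to size a worker pool to the number of logical cores the OS reports. Here is a minimal Python sketch of that pattern (the `work` function is a made-up placeholder). One caveat for this particular language: CPython's GIL prevents pure-Python, CPU-bound code from running on several cores at once, so a thread pool like this pays off mainly for IO-bound tasks or C-extension code that releases the GIL; for pure-Python number crunching, a `ProcessPoolExecutor` is used instead.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Placeholder task; imagine a call that blocks on IO
    # or a native routine that releases the GIL.
    return n * n

# One worker per logical core is a common default.
n_workers = os.cpu_count() or 1

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    results = list(pool.map(work, range(8)))

print(n_workers, results)
```

Note that spawning far more workers than cores for CPU-bound work buys nothing: the extra threads just take turns on the same cores, paying for the context switches the answer mentions.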

But for hardware multi-threading the situation is different, because it is an entirely different thing. A CPU has circuits that perform arithmetic operations and circuits responsible for program flow. Now we play a trick: we double the number of the second kind of circuit and have the copies share the arithmetic circuits. Two threads can now execute different instructions simultaneously: they can't both do an addition at the same time, but one can add numbers while the other divides. And that's how performance is gained. So 100% load now means something different, because you have enabled additional circuits in the CPU: the relative figure is the same, but the absolute performance is higher.

Upvotes: 1
