user331789

Reputation: 1

Threads vs processes: are the visualizations correct?

I have no background in Computer Science, but I have read some articles about multiprocessing and multi-threading, and would like to know if this is correct.

SCENARIO 1: HYPER-THREADING DISABLED


Let's say I have 2 cores, with 3 threads 'running' (competing?) per core, as shown in the picture (hyper-threading disabled). Then I take a snapshot at some moment and observe, for example, that: Core 1 is running Thread 3. Core 2 is running Thread 5.

Are these declarations (and the picture) correct?

A) There are 6 threads running concurrently.

B) There are 2 threads (3 and 5) (and processes) running in parallel.

SCENARIO 2: HYPER-THREADING ENABLED


Let's say I have HYPER-THREADING ENABLED this time.

Are these declarations (and the picture) correct?

C) There are 12 threads running concurrently.

D) There are 4 threads (3, 5, 7, 12) (and processes) running in 'almost' parallel, on the vCPUs?

E) There are 2 threads (5, 7) running 'strictly' in parallel?

Upvotes: 0

Views: 851

Answers (2)

Stephen C

Reputation: 718678

There are a couple of things that are wrong (or unrealistic) about your diagrams:

  1. A typical desktop or laptop has one processor chipset on its motherboard. With Intel and similar, the chipset consists of a CPU chip together with a "northbridge" chip and a "southbridge" chip.

    On a server class machine, the motherboard may actually have multiple CPU chips.

  2. A typical modern CPU chip will have more than one core; e.g. 2 or 4 on low-end chips, and up to 28 (for Intel) or 64 (for AMD) on high-end chips.

  3. Hyperthreading and VCPUs are different things.

    • Hyperthreading is Intel proprietary technology1 which allows one physical core to act as two logical cores running two independent instruction streams in parallel. Essentially, the physical core has two sets of registers; i.e. 2 program counters, 2 stack pointers and so on. The instructions for both instruction streams share instruction execution pipelines, on-chip memory caches and so on. The net result is that for some instruction mixes (non-memory-intensive) you get significantly better performance than if the pipelines were dedicated to a single instruction stream. The operating system sees each hyperthread as if it were a dedicated core, albeit a bit slower.

    • VCPU or virtual CPU is terminology used in a cloud computing context. On a typical cloud computing server, the customer gets a virtual server that behaves like a regular single or multi-core computer. In reality, there will typically be many of these virtual servers on a compute node. Some special software called a hypervisor mediates access to the hardware devices (network interfaces, disks, etc) and allocates CPU resources according to demand. A VCPU is a virtual server's view of a core, and is mapped to a physical core by the hypervisor. (The accounting trick is that VCPUs are typically overcommitted; i.e. the sum of VCPUs is greater than the number of physical cores. This is fine ... unless the virtual servers all get busy at the same time.)

    In your diagram, you are using the term VCPU where the correct term would be hyperthread.

  4. Your diagram shows each core (or hyperthread) associated with a distinct group of threads. In reality, the mapping from cores to threads is more fluid. If a core is idle, the operating system is free to schedule any (runnable) thread to run on it. (Some operating systems allow you to tie a given thread to a specific core for performance reasons. It is rarely necessary to do this.)
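A minimal Python sketch of point 4 (Python chosen just for illustration): the OS decides which cores a process's threads may run on, and on Linux you can inspect (or restrict) that set. `os.sched_getaffinity` is a Linux-only API, hence the `hasattr` guard.

```python
import os

# The OS schedules any runnable thread onto any idle core it is allowed
# to use; the core-to-thread mapping is fluid, not fixed as in the diagram.
print("logical CPUs visible to the OS:", os.cpu_count())

if hasattr(os, "sched_getaffinity"):  # Linux-only API
    # The set of cores this process may currently be scheduled on.
    # os.sched_setaffinity() is the "tie a thread to a core" mechanism
    # mentioned above -- rarely needed in practice.
    allowed = os.sched_getaffinity(0)
    print("process may be scheduled on cores:", sorted(allowed))
```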


Your observations about the first diagram are correct.

Your observations about the second diagram are slightly incorrect. As stated above, the hyperthreads on a core share the execution pipelines. This means that they are effectively executing at the same time. There is no "almost parallel". As I said above, it is simplest to think of a hyperthread as a core "that runs a bit slower".


1 - Intel was not the first company to come up with this idea. For example, CDC mainframes used it in the 1960s to get 10 PPUs from a single core and 10 sets of registers. This was before the days of pipelined architectures.

Upvotes: 0

antpngl92

Reputation: 534

A process is an instance of a program running on a computer. The OS uses processes to maximize utilization and to support multi-tasking, protection, etc. Processes are scheduled by the OS, time-sharing the CPU. Every process has resources like memory pages and open files, plus the information that defines its state: program counter, registers, stacks.

In CS, concurrency is the ability of different parts or units of a program, algorithm or problem to be executed out of order, or in partial order, without affecting the final outcome. A "traditional process" is an OS abstraction presenting what is needed to run a single program; there is NO concurrency within a "traditional process" with a single thread of execution. A "modern process", however, is one with multiple threads of execution. A thread is simply a sequential execution stream within a process. There is no protection between threads, since they share the process's resources. Multithreading is when a single program is made up of a number of different concurrent activities (threads of execution).

A few concepts need to be distinguished:

  • Multiprocessing is when we have multiple CPUs.

  • Multiprogramming is when the CPU executes multiple jobs or processes.

  • Multithreading is when the CPU executes multiple threads per process.

So what does it mean to run two threads concurrently? The scheduler is free to run the threads in any order and interleaving (FIFO, random, etc.). It can choose to run each thread to completion, or to time-slice in big or small chunks.

A concurrent system supports more than one task by allowing all tasks to make progress. A parallel system can perform more than one task simultaneously. It is possible, though, to have concurrency without parallelism: uniprocessor systems provide the illusion of parallelism by rapidly switching between processes (well, actually, the CPU scheduler provides the illusion). Such processes run concurrently, but not in parallel.
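The ideas above can be sketched in a few lines of Python (a toy example, not tied to any particular program from the question): several threads share the process's memory, the scheduler interleaves them in some order, and because there is no protection between threads, the shared counter needs a lock.

```python
import threading

counter = {"value": 0}
lock = threading.Lock()

def worker(n):
    # Each thread is a sequential execution stream; the scheduler is
    # free to interleave the streams in any order.
    for _ in range(n):
        with lock:                 # threads share process memory, so
            counter["value"] += 1  # shared state needs synchronization

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])  # 40000: every thread made progress concurrently
```

Whether the threads also ran in parallel depends on the hardware (and, in CPython's case, the GIL); concurrency only guarantees that all of them make progress.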

Hyperthreading is Intel’s name for simultaneous multithreading. It basically means that one CPU core can work on two problems at the same time. It doesn’t mean that the CPU can do twice as much work. Just that it can ensure all its capacity is used by dealing with multiple simpler problems at once. To your OS, each real silicon CPU core looks like two, so it feeds each one work as if they were separate. Because so much of what a CPU does is not enough to work it to the maximum, hyperthreading makes sure you’re getting your money’s worth from that chip.
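Because the OS sees each hyperthread as a separate CPU, standard APIs report logical processors, and thread pools are commonly sized to that count. A small sketch (the `square` function is just a placeholder workload):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# os.cpu_count() reports logical processors -- e.g. 8 on a 4-core
# hyperthreaded chip -- so a pool sized to it gives the OS one worker
# to schedule per logical core.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```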

Upvotes: 1
