Reputation: 1460
Follow-up question from Multi-core usage, threads, thread-pools.
Are threads moved from one core to another during their lifetime?
Of course. Imagine you have three threads running on a dual-core system. Show me a fair schedule that doesn't involve regularly moving threads between cores.
This is my first time on this site, so I didn't have enough rep to comment I guess. I decided to just make a new question referencing the one I wanted to comment on.
What is the process of selecting a core to move a thread to? Is it like the scheduler has a list of threads that need processing time, and as one finishes it puts another one in?
Also, I was wondering if there is a reference for the statement that threads are moved between cores at all, or is it just considered "common knowledge"?
Thanks!
Upvotes: 9
Views: 1083
Reputation: 66793
Is it like the scheduler has a list of threads that need processing time and as one finishes it puts another one in?
Almost. What you describe is called cooperative multitasking, where the threads are expected to regularly yield execution back to the scheduler (e.g. by living only for a short while, or by regularly calling Thread.Sleep(0)). This is not how a modern consumer operating system works, because one rogue uncooperative thread can hog the CPU in such a system.
What happens instead is that at regular time intervals, a context switch occurs. The running thread, whether it likes it or not, is suspended. This involves storing a snapshot of the state of the CPU registers in memory. The kernel's scheduler then gets a chance to run and re-evaluates the situation, and may decide to let another thread run for a while. In this way slices of CPU time (measured in milliseconds or less) are given to the different threads. This is called pre-emptive multitasking.
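Pre-emption is easy to observe: start two threads that busy-loop and never yield voluntarily, and both still make progress, because the OS (and, in CPython's case, the interpreter as well) suspends each one at regular intervals. A minimal sketch in Python:

```python
import threading
import time

counts = [0, 0]

def spin(i, stop):
    # A busy loop that never yields voluntarily; it is pre-empted
    # anyway at regular intervals, so the other thread also runs.
    while not stop.is_set():
        counts[i] += 1

stop = threading.Event()
threads = [threading.Thread(target=spin, args=(i, stop)) for i in range(2)]
for t in threads:
    t.start()
time.sleep(0.2)   # let both threads compete for CPU time
stop.set()
for t in threads:
    t.join()
print(counts)     # both counters advance, even though neither thread yields
```

Neither thread ever calls a sleep or yield function, yet both counters grow: the scheduler forcibly takes the CPU away and hands out time slices.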
When a system has more than one CPU or multiple CPU cores, the same thing happens for each core. Execution on each core is regularly suspended, and the scheduler decides which thread to run on it next. Since each CPU core has the same registers, the scheduler can and will move a thread around between cores while it attempts to fairly allocate time slices.
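The three-threads-on-two-cores point from the question can be made concrete with a toy model (names and structure are illustrative, not any real kernel's code): each tick, every core takes the next ready thread from a shared run queue, so a fair schedule inevitably lands a thread on different cores over time.

```python
from collections import deque

def fair_slices(threads, cores, ticks):
    """Toy round-robin over a shared run queue: each tick, every core
    takes the next ready thread, then the threads rejoin the queue."""
    ready = deque(threads)
    history = {t: [] for t in threads}  # which core each thread ran on, per slice
    for _ in range(ticks):
        running = [ready.popleft() for _ in range(cores)]
        for core, t in enumerate(running):
            history[t].append(core)
        ready.extend(running)           # back of the queue after the slice
    return history

print(fair_slices(["T1", "T2", "T3"], cores=2, ticks=3))
# → {'T1': [0, 1], 'T2': [1, 0], 'T3': [0, 1]}
```

After three ticks every thread has run on both cores: with three threads and two cores, fairness alone forces migration.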
Upvotes: 2
Reputation: 35954
It's not that a thread lives on a particular core and has to go through some process of being moved to another.
The operating system simply has a list of threads (and/or processes) that are ready to execute and will dispatch them on whatever core/cpu that happens to be available.
That said, any smart scheduler will try to schedule the thread on the same core as much as possible - simply to increase performance (data is more likely to be in that core's cache etc.)
Upvotes: 6
Reputation: 7889
Windows provides an API to set thread affinity (i.e. to restrict the set of CPUs a thread may be scheduled on), such as SetThreadAffinityMask. There would be no need for such an API if a thread always executed on the same core.
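The Linux-side counterpart is `sched_setaffinity`, which Python exposes as `os.sched_setaffinity`; it is not available on every platform, hence the guard in this sketch:

```python
import os

# os.sched_setaffinity / os.sched_getaffinity exist on Linux; on
# Windows the analogous call is SetThreadAffinityMask (not exposed
# by the os module), so the code guards for availability.
if hasattr(os, "sched_getaffinity"):
    allowed = os.sched_getaffinity(0)     # cores this process may run on
    os.sched_setaffinity(0, {min(allowed)})  # pin ourselves to one core
    pinned = os.sched_getaffinity(0)
    os.sched_setaffinity(0, allowed)      # restore the original mask
    print("pinned to:", pinned)
else:
    pinned = None                         # e.g. Windows or macOS
    print("sched_setaffinity not available on this platform")
```

Pinning defeats the scheduler's freedom to migrate the thread, which is exactly why the API exists: by default, that freedom is used.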
Upvotes: 1
Reputation: 112915
MSDN has some articles that would probably help clarify some things: Scheduling Priorities and Multiple Processors.
Excerpt (Scheduling Priorities):
Threads are scheduled to run based on their scheduling priority. Each thread is assigned a scheduling priority. The priority levels range from zero (lowest priority) to 31 (highest priority). Only the zero-page thread can have a priority of zero. (The zero-page thread is a system thread responsible for zeroing any free pages when there are no other threads that need to run.)
The system treats all threads with the same priority as equal. The system assigns time slices in a round-robin fashion to all threads with the highest priority. If none of these threads are ready to run, the system assigns time slices in a round-robin fashion to all threads with the next highest priority. If a higher-priority thread becomes available to run, the system ceases to execute the lower-priority thread (without allowing it to finish using its time slice), and assigns a full time slice to the higher-priority thread.
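The excerpt's rule (round-robin among the threads at the highest priority that has anyone ready, lower priorities only when those queues are empty) can be sketched as a toy model; this is an illustration of the quoted text, not the actual Windows scheduler:

```python
from collections import deque

def schedule(threads, slices):
    """Round-robin among the highest-priority ready threads, as the
    MSDN excerpt describes. `threads` is a list of (name, priority)."""
    queues = {}
    for name, prio in threads:
        queues.setdefault(prio, deque()).append(name)
    ran = []
    for _ in range(slices):
        top = max(p for p, q in queues.items() if q)  # highest non-empty priority
        q = queues[top]
        name = q.popleft()
        ran.append(name)
        q.append(name)  # still ready: back of its priority queue
    return ran

print(schedule([("A", 8), ("B", 8), ("C", 4)], 4))
# → ['A', 'B', 'A', 'B']  (C never runs while higher-priority threads are ready)
```

Note how C is starved as long as A and B remain ready, matching the excerpt's statement that lower priorities only get time slices when no higher-priority thread can run.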
And in regards to Multiple Processors:
Computers with multiple processors are typically designed for one of two architectures: non-uniform memory access (NUMA) or symmetric multiprocessing (SMP).
In a NUMA computer, each processor is closer to some parts of memory than others, making memory access faster for some parts of memory than other parts. Under the NUMA model, the system attempts to schedule threads on processors that are close to the memory being used. For more information about NUMA, see NUMA Support.
In an SMP computer, two or more identical processors or cores connect to a single shared main memory. Under the SMP model, any thread can be assigned to any processor. Therefore, scheduling threads on an SMP computer is similar to scheduling threads on a computer with a single processor. However, the scheduler has a pool of processors, so that it can schedule threads to run concurrently. Scheduling is still determined by thread priority, but it can be influenced by setting thread affinity and thread ideal processor, as discussed in this topic.
Upvotes: 3