Julian A.

Reputation: 11470

Thread locality

I have this statement, which came from Goetz's Java Concurrency In Practice:

Runtime overhead of threads due to context switching includes saving and restoring execution context, loss of locality, and CPU time spent scheduling threads instead of running them.

What is meant by "loss of locality"?

Upvotes: 8

Views: 551

Answers (2)

CaptainHastings

Reputation: 1597

Just to elaborate on the "cache miss" point made by JB Nizet.

As a thread runs on a core, it keeps recently used data in the L1/L2 caches, which are local to that core. Modern processors typically read data from the L1/L2 caches in about 5-7 ns.

When, after a pause (from being interrupted, put on a wait queue, etc.), a thread runs again, it will most likely run on a different core. That means the L1/L2 caches of the new core hold no data related to the work the thread was doing, so the thread must go to main memory (which takes about 100 ns) to reload its data before it can proceed.

There are ways to mitigate this, such as pinning threads to specific cores using a thread-affinity library.
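The Java standard library has no API for pinning a thread to a core; that requires a native-call library such as OpenHFT's Java-Thread-Affinity. A portable (and weaker) approximation is to confine related work to a single worker thread with a single-threaded executor, so the data it touches tends to stay warm in one core's caches, although the OS scheduler may still migrate that thread between cores. A minimal sketch (class and method names are mine):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleWorker {

    // Runs two tasks on a single-threaded executor and reports whether
    // they executed on the same thread. With newSingleThreadExecutor()
    // they always do, so related work shares that thread's cached state.
    static boolean runOnSameThread() throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        try {
            String first  = worker.submit(() -> Thread.currentThread().getName()).get();
            String second = worker.submit(() -> Thread.currentThread().getName()).get();
            return first.equals(second);
        } finally {
            worker.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runOnSameThread()); // prints "true"
    }
}
```

This only keeps the work on one *thread*; true core pinning still needs an affinity library or OS-level tools like `taskset` on Linux.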

Upvotes: 3

JB Nizet

Reputation: 692033

When a thread works, it often reads data from memory and from disk. That data is often stored in contiguous or nearby locations in memory or on disk (for example, when iterating over an array, or when reading the fields of an object). The hardware bets on this by loading blocks of memory into fast caches, so that accesses to contiguous or nearby memory locations are faster.

When you have a high number of threads and switch between them, those caches often have to be flushed and reloaded, so a thread's code takes more time than if it had run all at once, without switching away and coming back later.
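The effect of the hardware's bet on contiguous access can be seen even within a single thread. The sketch below (class and method names are mine) sums the same 2D array twice: once in the order the rows are laid out in memory, and once jumping between rows on every access. Both produce the same sum, but the row-major version lets the cache work with whole lines of contiguous data, while the column-major version tends to incur far more cache misses on large arrays:

```java
public class Locality {

    // Visits elements in the order each row is laid out in memory,
    // so consecutive accesses hit the same cache line.
    static long sumRowMajor(int[][] m) {
        long sum = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                sum += m[i][j];
        return sum;
    }

    // Same arithmetic, but each access jumps to a different row,
    // touching a different cache line almost every time.
    static long sumColumnMajor(int[][] m) {
        long sum = 0;
        for (int j = 0; j < m[0].length; j++)
            for (int i = 0; i < m.length; i++)
                sum += m[i][j];
        return sum;
    }

    public static void main(String[] args) {
        int n = 2048;
        int[][] m = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                m[i][j] = 1;

        long t0 = System.nanoTime();
        long a = sumRowMajor(m);
        long t1 = System.nanoTime();
        long b = sumColumnMajor(m);
        long t2 = System.nanoTime();

        System.out.println(a == b); // true: identical result either way
        System.out.printf("row-major: %d ms, column-major: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```

(Note that a Java `int[][]` is an array of row arrays rather than one contiguous block, so each row is contiguous but rows may be scattered; the locality contrast between the two traversal orders still holds.)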

It's a bit like how we humans need some time to get back to a task after being interrupted: finding where we were, what we were doing, and so on.

Upvotes: 10
