Nereid Regulus

Reputation: 132

Cache efficiency with static member in thread

I'm currently writing an application with multiple worker threads running in parallel. The main part of the program is executed before the workers, and each worker is put to sleep once it has finished its tasks:

void MainLoop()
{
    // ...

    SoundManager::PlaySound("sound1.mp3");  // Queue a sound to be played; it is stored in a list inside SoundManager
    SoundManager::PlaySound("sound2.mp3");
    SoundManager::PlaySound("sound3.mp3");

    // ...

    SoundThreadWorker.RunJob(); // Wake up the thread and play every sound pushed into SoundManager

    // Running other threads

    SoundThreadWorker.WaitForFinish();  // Wait until the thread has finished its tasks; the thread is put to sleep (but not closed)

    // Waiting for other threads

    // ...
}

// In the SoundThreadWorker class, running in a different thread from the main loop
void RunJob()
{
    SoundManager::PlayAllSound();   // Play all sounds stored in SoundManager
}

In this case, the static variable storing all the sounds should be safe, because no sounds are added while the thread is running.
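
To make the hand-off concrete, here is a minimal, self-contained sketch of what I mean (the condition-variable worker and the static sound list are simplified stand-ins for my real SoundThreadWorker and SoundManager classes):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Simplified stand-in for SoundManager: a static list filled by the main
// thread while the worker is asleep, then drained by the worker.
struct SoundManager
{
    static inline std::vector<std::string> sounds;  // requires C++17

    static void PlaySound(const std::string& file) { sounds.push_back(file); }

    static void PlayAllSound()
    {
        for (const auto& s : sounds)
            std::cout << "playing " << s << '\n';   // stand-in for real playback
        sounds.clear();
    }
};

std::mutex m;
std::condition_variable cv;
bool jobPending = false;
bool quitWorker = false;

void WorkerLoop()
{
    for (;;)
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return jobPending || quitWorker; });  // sleep until woken
        if (quitWorker) return;

        SoundManager::PlayAllSound();   // the worker owns the list while this runs

        jobPending = false;
        cv.notify_all();                // wake WaitForFinish()
    }
}

int main()
{
    std::thread worker(WorkerLoop);

    // Main loop body: queue sounds while the worker is asleep.
    SoundManager::PlaySound("sound1.mp3");
    SoundManager::PlaySound("sound2.mp3");
    SoundManager::PlaySound("sound3.mp3");

    {   // RunJob(): wake the worker
        std::lock_guard<std::mutex> lock(m);
        jobPending = true;
    }
    cv.notify_one();

    {   // WaitForFinish(): block until the worker is idle again
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !jobPending; });
    }

    {   // Shut the worker down before exiting.
        std::lock_guard<std::mutex> lock(m);
        quitWorker = true;
    }
    cv.notify_all();
    worker.join();
}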

Is this cache efficient?

I have read the following in Agner Fog's C++ optimization manual (https://www.agner.org/optimize/optimizing_cpp.pdf):

"The different threads need separate storage. No function or class that is used by multiple threads should rely on static or global variables. (See thread-local storage p. 28) The threads have each their stack. This can cause cache contentions if the threads share the same cache."

I have a hard time understanding how static variables are stored in the cache, and how they are used by each thread. Do I end up with two instances of SoundManager in cache, since threads do not share their stacks? Do I need to create shared memory to avoid this problem?

Upvotes: 3

Views: 216

Answers (1)

Omnifarious

Reputation: 56088

That passage is about memory that is changed, not about memory that remains constant. Sharing constants between threads is fine.

When you have multiple CPUs each updating the same place, they have to send their changes back and forth to each other all the time. This results in contention over 'owning' a particular piece of memory.

Often the ownership isn't explicit. But when one CPU tells all the others that a particular cache line needs to be invalidated because it just changed something there, then all the other CPUs have to evict the value from their caches. This has the effect of the CPU that last modified a piece of memory effectively 'owning' the cache line it was in.
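
As a rough, self-contained illustration of that cache-line ping-pong (this is just a toy benchmark I made up, not anything from your program), two threads that keep writing to counters sitting on the same cache line invalidate each other's copy constantly, while the same counters padded onto separate cache lines don't interfere:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Two counters packed next to each other: they almost certainly share a cache
// line, so writes from one thread invalidate that line in the other core's cache.
struct Shared { std::atomic<long> a{0}, b{0}; };

// The same two counters, each aligned to 64 bytes so they live on different
// cache lines (64 is a common line size; adjust for your CPU).
struct Padded { alignas(64) std::atomic<long> a{0}; alignas(64) std::atomic<long> b{0}; };

template <class Counters>
long long run()
{
    Counters c;
    auto work = [](std::atomic<long>& n) {
        for (int i = 0; i < 10'000'000; ++i)
            n.fetch_add(1, std::memory_order_relaxed);
    };

    auto start = std::chrono::steady_clock::now();
    std::thread t1(work, std::ref(c.a));
    std::thread t2(work, std::ref(c.b));
    t1.join();
    t2.join();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main()
{
    std::cout << "same cache line:      " << run<Shared>() << " ms\n";
    std::cout << "separate cache lines: " << run<Padded>() << " ms\n";
}

On most machines the 'same cache line' run is noticeably slower even though each thread only ever touches its own counter; the extra time is the line's ownership bouncing between cores. This effect is usually called false sharing.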

And, again, this is only an issue for things that are changed.

Also, the view of memory and cache that I gave you is rather simplistic. Please don't use it when reasoning about the thread safety of a particular piece of code. It's sufficient to understand why multiple CPUs updating the same piece of memory is bad for your cache, but it's not sufficient for understanding which CPU's version of a particular memory location ends up being used by the others.

A memory location that is used by multiple threads but doesn't change during their lifetime will simply end up in multiple CPUs' caches. That isn't a problem. Nor is it a problem for a memory location that doesn't change to be stored in the L2 and L3 caches that are shared between CPUs.
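
For contrast, here is a small sketch of that harmless case: a static table that is fully initialized before the threads start and only read afterwards. Every core can keep its own cached copy and nothing ever needs to be invalidated (the table contents and thread count here are made up purely for illustration):

#include <array>
#include <iostream>
#include <thread>
#include <vector>

// A static table that is initialized before any thread starts and never
// modified afterwards: every core can cache it without any contention.
static const std::array<int, 4> kTable = {1, 2, 3, 5};

int main()
{
    std::vector<std::thread> readers;
    std::vector<long> sums(4, 0);

    for (int t = 0; t < 4; ++t)
        readers.emplace_back([t, &sums] {
            long s = 0;
            for (int i = 0; i < 1'000'000; ++i)
                s += kTable[i % kTable.size()];   // reads only, no invalidations
            sums[t] = s;                          // each thread writes its own slot
        });

    for (auto& r : readers) r.join();
    for (long s : sums) std::cout << s << '\n';
}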

Upvotes: 2
