user0002128

Reputation: 2921

Shared-memory multi-threading and data access?

Regarding performance: assume we have a block of data that will be frequently accessed by each thread, and this data is read-only, meaning the threads won't do anything besides read it.

Is it beneficial to create one copy of this data (assuming it is read-only) for each thread, or not?

If the frequently accessed data is shared by all threads (instead of each thread having its own copy), wouldn't that increase the chance of the data being properly cached?

Upvotes: 2

Views: 1533

Answers (1)

Alexey Kukanov

Reputation: 12784

One copy of read-only data per thread will not help you with caching; quite the opposite, it can hurt. When threads execute on the same multicore (and possibly hyperthreaded) CPU, they share its cache, and the per-thread copies of the data then compete for limited cache space.

However, on a multi-CPU system (virtually all of which are NUMA nowadays), each CPU typically has its own memory bank, and access cost differs between "local" and "remote" memory. There it can be beneficial to keep a per-CPU copy of the read-only data, placed in that CPU's local memory bank.

The memory mapping is controlled by the OS, so if you take this road it makes sense to study the NUMA-related behavior of your OS. For example, Linux uses a first-touch memory allocation policy: the mapping to physical memory happens not at malloc time but when the program accesses a memory page for the first time, and the OS tries to allocate the physical page from the bank local to the touching CPU.

And the usual performance motto applies: measure, don't guess.

Upvotes: 8
