labotsirc

Reputation: 722

OpenMP theory vs. in-practice efficiency?

As I increase the number of cores for an embarrassingly parallel linear problem (a for loop where each iteration does a lot of computation, all independent of the other iterations), the efficiency (defined as Ts/(p*Tp)) decreases roughly linearly with the number of cores.

I know that in practice thread scheduling, the OS, and cache problems can slow down an implementation a lot.

I can add that I do get speedup, and that the problem in theory has linear speedup, which means efficiency 1 as p increases.

My question, then: how do the OS, thread scheduling, memory accesses, and other technical limitations affect the efficiency of the algorithm as the number of processors increases? Should they affect it at all?
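For concreteness, here is a minimal sketch of the kind of loop and measurement I mean (the workload and names are made up for illustration, not my actual code):

    #include <stdio.h>
    #include <math.h>
    #include <omp.h>

    #define N    100000   /* independent iterations */
    #define WORK 1000     /* per-iteration computation */

    /* Stand-in for the heavy, independent per-iteration work. */
    static double heavy(int i)
    {
        double x = i;
        for (int k = 0; k < WORK; ++k)
            x = sin(x) + cos(x);
        return x;
    }

    int main(void)
    {
        static double out[N];

        /* Serial reference time Ts. */
        double t0 = omp_get_wtime();
        for (int i = 0; i < N; ++i)
            out[i] = heavy(i);
        double ts = omp_get_wtime() - t0;

        /* Parallel time Tp with p threads. */
        int p = omp_get_max_threads();
        t0 = omp_get_wtime();
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < N; ++i)
            out[i] = heavy(i);
        double tp = omp_get_wtime() - t0;

        printf("p=%d Ts=%.3f Tp=%.3f speedup=%.2f efficiency=%.2f\n",
               p, ts, tp, ts / tp, ts / (p * tp));
        printf("checksum %g\n", out[N / 2]); /* keep the loops from being optimized away */
        return 0;
    }

Compiled with gcc -O2 -fopenmp -lm; the efficiency column is the Ts/(p*Tp) I am plotting.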

Upvotes: 0

Views: 409

Answers (3)

Hristo Iliev

Reputation: 74385

Is your problem CPU-bound or memory-bound? What is your system architecture: an SMP or a NUMA one? How much cache do your processors have? Do you bind your threads to cores or not? ...

There are too many parameters to be considered before anyone can answer your question. I would suggest that you use something like Intel VTune Amplifier or Oracle Collector/Analyzer in order to see where and what causes the increasing inefficiency.
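As a first quick check, you can at least see whether the OS is migrating your threads between cores. A minimal Linux-only sketch (sched_getcpu is glibc-specific, and the program itself is only illustrative):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sched.h>
    #include <omp.h>

    /* Print which CPU each OpenMP thread is running on. If the numbers
       change between runs (or across repeated parallel regions), the OS
       is migrating threads and binding might help. */
    int main(void)
    {
        #pragma omp parallel
        {
            printf("thread %d on cpu %d\n",
                   omp_get_thread_num(), sched_getcpu());
        }
        return 0;
    }

With GCC's libgomp you can then pin threads via the GOMP_CPU_AFFINITY environment variable (KMP_AFFINITY with the Intel runtime) and compare the efficiency curves with and without binding.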

Upvotes: 0

Benoir

Reputation: 1244

You should look into strong scaling:

https://www.sharcnet.ca/help/index.php/Measuring_Parallel_Scaling_Performance#Strong_Scaling

You basically get diminishing returns as you add more cores to the problem because of all the factors you mentioned.
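For a concrete illustration with made-up numbers, suppose the serial run takes Ts = 100 s:

    p = 1:  Tp = 100 s  ->  efficiency = 100 / (1 * 100) = 1.00
    p = 4:  Tp =  27 s  ->  efficiency = 100 / (4 * 27)  ~ 0.93
    p = 8:  Tp =  15 s  ->  efficiency = 100 / (8 * 15)  ~ 0.83

Perfect strong scaling would keep the efficiency at 1.00 (Tp = Ts/p); in practice the overheads you listed eat into it more and more as p grows.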

Upvotes: 1

duffymo

Reputation: 308763

You might be thinking about something like Amdahl's Law, but the specifics of each case make it difficult to pin down.
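For reference, Amdahl's Law: if a fraction f of the runtime is parallelizable, the best possible speedup on p processors is

    S(p) = 1 / ((1 - f) + f / p)

so even with f = 0.95 the speedup can never exceed 20x, no matter how many cores you add. That said, for an embarrassingly parallel loop your efficiency loss is more likely scheduling and memory overhead than a serial fraction, which is why the specifics matter.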

Upvotes: 3
