Christian Neverdal

Reputation: 5375

Comparing CPUs to GPUs - does it always make sense?

I was reading this article on GPU speed vs CPU speed. Since a CPU has a lot of responsibilities the GPU does not need to have, why do we even compare them like that in the first place? The quote "I can’t recall another time I’ve seen a company promote competitive benchmarks that are an order of magnitude slower" makes it sound like both Intel and NVIDIA are making GPUs.

Obviously, from a programmer's perspective, you wonder if porting your application to the GPU is worth your time and effort, and in that case a (fair) comparison is useful. But does it always make sense to compare them?

What I am after is a technical explanation of why it might be weird for Intel to promote their slower-than-NVIDIA-GPUs benchmarks, as Andy Keane seems to think.

Upvotes: 0

Views: 1295

Answers (2)

Michał Kosmulski

Reputation: 10020

The answer depends on the kind of code that is to be executed. GPUs are great for highly parallelizable tasks or tasks which demand high memory bandwidth, and for those the speedups may indeed be very high. However, they are not well suited for applications with lots of sequential operations or with complex control flow.
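
To make the contrast concrete, here is a minimal CUDA sketch (the function names and constants are illustrative, not taken from any real benchmark). The first routine is embarrassingly parallel, so thousands of GPU threads can each compute one element; the second has a loop-carried dependency, so no amount of parallel hardware helps and a fast CPU core wins:

    // Highly parallel: classic SAXPY, y[i] = a*x[i] + y[i].
    // No thread depends on any other, so the GPU can run them all at once.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    // Inherently sequential: each term needs the previous one, so the
    // loop cannot be spread across threads at all.
    float dependent_chain(int n, float seed) {
        float v = seed;
        for (int i = 0; i < n; ++i)
            v = v * 1.0000001f + 1.0f;   // depends on the prior iteration
        return v;
    }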

The consequence is that benchmark numbers hardly say anything unless you know exactly what application was benchmarked and how similar that use case is to the code you actually want to accelerate. Depending on the code you run, your GPU may be 100 times faster or 100 times slower than a CPU. Typical usage scenarios require a mix of different kinds of operations, so the general-purpose CPU is not dead yet and won't be for quite some time.

If you have a specific task to solve, it may well make sense to compare the performance of CPU vs GPU for that particular task. However, the results you get from the comparison will usually not translate directly to the results for a different benchmark.
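
As a rough illustration of what such a per-task comparison involves, here is a CUDA/C++ timing sketch. The element-squaring workload is only a placeholder for whatever task you actually care about; the point is to time the same work both ways and to charge the GPU for the host-device transfers a real application would pay for:

    #include <cuda_runtime.h>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Placeholder workload: square every element.
    __global__ void square_kernel(int n, float *data) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= data[i];
    }

    int main() {
        const int n = 1 << 24;  // ~16M elements, purely illustrative
        std::vector<float> cpu_buf(n, 2.0f), gpu_buf(n, 2.0f);

        // Time the task on the CPU.
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < n; ++i) cpu_buf[i] *= cpu_buf[i];
        auto t1 = std::chrono::steady_clock::now();

        // Time the task on the GPU, *including* the host<->device copies;
        // omitting them flatters the GPU.
        float *dev = nullptr;
        cudaMalloc(&dev, n * sizeof(float));
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        cudaMemcpy(dev, gpu_buf.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        square_kernel<<<(n + 255) / 256, 256>>>(n, dev);
        cudaMemcpy(gpu_buf.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float gpu_ms = 0.0f;
        cudaEventElapsedTime(&gpu_ms, start, stop);
        double cpu_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("CPU: %.2f ms   GPU incl. transfers: %.2f ms\n", cpu_ms, gpu_ms);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(dev);
        return 0;
    }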

Upvotes: 1

Patrick87

Reputation: 28302

Since a CPU has a lot of responsibilities the GPU does not need to have, why do we even compare them like that in the first place?

Well, if CPUs offered better performance than GPUs, people would use additional CPUs as coprocessors instead of using GPUs as coprocessors. These additional CPU coprocessors wouldn't necessarily have the same baggage as main host CPUs.

Obviously, from a programmer's perspective, you wonder if porting your application to the GPU is worth your time and effort, and in that case a (fair) comparison is useful. But does it always make sense to compare them?

I think it makes sense and is fair to compare them; they are both kinds of processors, after all, and knowing in what situations using one is beneficial or detrimental can be very useful information. The important thing to keep in mind is that there are situations where using a CPU is a far superior way to go, and situations where using a GPU makes much more sense. GPUs do not speed up every application.

What I am after is a technical explanation of why it might be weird for Intel to promote their slower-than-NVIDIA-GPUs benchmarks, as Andy Keane seems to think.

It sounds like Intel didn't pick a particularly good application example if their only point was that CPUs aren't all that bad compared to GPUs. They might have picked examples where CPUs were indeed faster: ones without enough data parallelism, arithmetic intensity, or SIMD-friendly program behavior to make GPUs efficient. If you're picking a fractal-generating program to show CPUs are only 14x slower than GPUs, you're being silly; you should be computing terms in a series, or running a parallel job with lots of branch divergence or completely different code being executed by each thread. Intel could have done better than 14x; NVIDIA knows it, researchers and practitioners know it, and the muppets that wrote the paper NVIDIA is mocking should have known it.
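
To illustrate the kind of workload Intel could have picked, here is a hypothetical CUDA kernel with heavy branch divergence (the branch bodies are arbitrary; the point is that threads in the same warp follow different paths):

    // Divergent control flow: threads within one 32-wide warp take
    // different branches, so the hardware runs the branches back-to-back
    // while most SIMD lanes sit idle. A CPU with branch prediction handles
    // this kind of control flow far more gracefully.
    __global__ void divergent(int n, const int *kind, float *out) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        switch (kind[i] & 3) {                    // per-thread branch target
            case 0:  out[i] = sinf((float)i);          break;
            case 1:  out[i] = sqrtf((float)i + 1.0f);  break;
            case 2:  out[i] = logf((float)i + 1.0f);   break;
            default: out[i] = (float)i * 0.5f;         break;
        }
    }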

Upvotes: 2
