Reputation: 11
I'm working on a college assignment for my computer architecture class in which we have to run different benchmark tests on our personal computers to determine how different technologies affect their efficiency.
I'm using SiSoftware Sandra Lite 2021 for the benchmark tests (under Benchmark > Processor > Processor Arithmetic) and my CPU is an Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz.
First, I got the results by running the benchmark with all the options enabled (all extensions, multithreading, hyperthreading and so on). Then, I got the results with all the extensions disabled (SSE, AVX, FMA, AES, SHA...).
Here is a table with the average of three benchmark runs for each test:
| Benchmark used | Extensions enabled | Extensions disabled |
|---|---|---|
| Dhrystone Integer Native | 113.47 GIPS (AVX2) | 83 GIPS (ALU) |
| Whetstone Single-float Native | 83.32 GFLOPS (AVX/FMA) | 88.76 GFLOPS (FPU) |
| Whetstone Double-float Native | 68.79 GFLOPS (AVX/FMA) | 82.38 GFLOPS (FPU) |
Here's the question: why do I get higher scores in the Whetstone benchmark when I disable all the extensions?
I do understand why I get a lower score in the Dhrystone test when I disable extensions, because many of them work on the SIMD (Single Instruction, Multiple Data) principle. However, since several of these extensions also help the CPU do floating-point operations faster, I was expecting the same thing to happen in the Whetstone results.
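To make it concrete what I mean by "extensions help floating-point operations", here is a minimal sketch (hypothetical functions, not the actual Whetstone code) of the same single-precision multiply-add loop written once for the plain scalar FPU path and once with 8-wide AVX intrinsics. This is the kind of speedup I assumed Sandra would show when the extensions are enabled:

```c
// Minimal sketch: the same multiply-add done scalar vs. with AVX.
// Function names are made up for illustration; compile with e.g.
//   gcc -O2 -mavx saxpy.c -c
#include <immintrin.h>
#include <stddef.h>

/* Scalar version: one single-precision multiply-add per iteration. */
void saxpy_scalar(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* AVX version: eight single-precision multiply-adds per iteration
   (assumes n is a multiple of 8 to keep the sketch short). */
void saxpy_avx(float a, const float *x, float *y, size_t n)
{
    __m256 va = _mm256_set1_ps(a);           // broadcast a into all 8 lanes
    for (size_t i = 0; i < n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);  // load 8 floats from x
        __m256 vy = _mm256_loadu_ps(y + i);  // load 8 floats from y
        vy = _mm256_add_ps(_mm256_mul_ps(va, vx), vy);
        _mm256_storeu_ps(y + i, vy);         // store 8 results back to y
    }
}
```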
Any idea why I got these results?
Thanks in advance.
I'm including the list of features my CPU supports, in case it's useful:
Upvotes: 1
Views: 251