Reputation: 43
Can someone explain why running this code multiple times gives highly variable results? Unless I did something wrong, it should measure the time my system (macOS Sierra, Xcode 9.2) takes to run an empty for loop 1000 times.
#include <iostream>
#include <chrono>

void printstuff() {
    for (int i = 0; i < 1000; ++i) {
        // empty loop
    }
}

int main(int argc, const char* argv[]) {
    auto time1 = std::chrono::high_resolution_clock::now();
    printstuff();
    auto time2 = std::chrono::high_resolution_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(time2 - time1).count() << std::endl;
    return 0;
}
Upvotes: 0
Views: 70
Reputation: 378
How are you compiling that code? Which flags are you using?
If you are compiling with any optimization level enabled (see for example https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html for GCC's flags), then printstuff() is compiled down to zero instructions, since the loop has no observable effect. You can see that here: https://godbolt.org/z/gXq-mG
If you are not using any optimization flag, a single call to that loop still runs far too quickly relative to the clock's resolution to measure reliably; you are most likely measuring noise (timer overhead, scheduling, cache state). One way to get a measurable signal is to keep the compiler from removing the work and repeat it many times, as in the sketch below.
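This is a minimal sketch, not part of the original question: the volatile sink and the repetition count are illustrative choices. The volatile write forces the compiler to keep the loop body, and repeating the call raises the total runtime well above the clock's resolution, so the per-call average is meaningful.

#include <iostream>
#include <chrono>

volatile int sink = 0;  // a volatile write the optimizer must keep

void printstuff() {
    for (int i = 0; i < 1000; ++i) {
        sink = i;  // observable side effect: the loop cannot be eliminated
    }
}

int main() {
    constexpr int repetitions = 100000;  // illustrative count
    auto time1 = std::chrono::high_resolution_clock::now();
    for (int r = 0; r < repetitions; ++r) {
        printstuff();
    }
    auto time2 = std::chrono::high_resolution_clock::now();
    auto total_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(time2 - time1).count();
    std::cout << "average per call: " << total_ns / repetitions << " ns" << std::endl;
    return 0;
}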
If you want to benchmark code, I would recommend Google Benchmark; a minimal sketch follows.
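A minimal sketch of how the loop from the question could be measured with Google Benchmark (the name BM_EmptyLoop is illustrative). benchmark::DoNotOptimize marks the value as used, so the loop survives even at -O2, and the library handles repetition and statistics for you.

#include <benchmark/benchmark.h>

// Times the 1000-iteration loop; link with -lbenchmark -lpthread.
static void BM_EmptyLoop(benchmark::State& state) {
    for (auto _ : state) {
        for (int i = 0; i < 1000; ++i) {
            benchmark::DoNotOptimize(i);  // prevents dead-code elimination
        }
    }
}
BENCHMARK(BM_EmptyLoop);

BENCHMARK_MAIN();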
Upvotes: 1