user2345397

Reputation: 301

Are there tools/methods to objectively measure performance?

I'm writing a high performance application (a raytracer) in C++ using Visual Studio, and I just spent two days trying to root out a performance drop I witnessed after refactoring the code. The reason it took so long was that the performance drop was smaller than the normal run-to-run variation in execution time.

Not sure if this is normal, but sometimes the program may run at around 33fps pretty consistently, then if you close and rerun it, it may run at 37fps. This means that in order to test any new change, I had to manually run and rerun until I witnessed peak performance (and this could require up to 10 runs or so). Simply running it for some large number of frames and measuring the time doesn't fix this variability: for example, if the program runs for 40 seconds on average, it will nevertheless vary by 1-2 seconds or more, which makes this test nearly useless for detecting the 1 millisecond per frame performance loss I was dealing with.

Visual Studio's profiling tools also didn't help find an issue this small, because they were subject to the same variation, and in any case a profiler won't necessarily point to the exact offending line, so I have to test solutions, and the profiler is not very effective at confirming a proposed solution's efficacy.

I realize this all may sound like premature optimization, but I don't think it is because I'm optimizing only after finishing complete features; I'm just trying to monitor changes in performance regularly so that issues like the above don't slip in and just get added to the apparent cost of the new feature.

Anyways, my question is simply whether there's a way to objectively determine the "real" speed of an application, discounting the effect of variation. Or, failing that, how do developers deal with such issues? I doubt that my current process is the ideal one.

Upvotes: 2

Views: 339

Answers (1)

zerocukor287

Reputation: 1073

There are lots of profilers for both C++ and OpenGL. For those who just need the links, here they are.

OpenGL debugger-profiler

C++ profilers, but I recommend Google's Orbit because it has a dark theme.

My eyes stopped at

Objectively measure performance

As you mentioned, the speed varies from run to run because the system is too complex. It helps if the scope is small and the test covers only some key algorithms. It's worth automating this and collecting some reference data. As every scientist says, one test is not a test; you should rely on regular tests in controlled environments.
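
As an example of what automating a test and comparing it against collected reference data could look like, here is a minimal sketch. The render_one_frame() stand-in, the baseline number, and the tolerance are all placeholders you would replace with your own code and values measured on your own machine:

    #include <chrono>
    #include <iostream>

    // Hypothetical stand-in for the code under test; replace with a real hook.
    void render_one_frame() {
        volatile double sink = 0.0;
        for (int i = 0; i < 100000; ++i) sink += i * 0.5;
    }

    int main() {
        using clock = std::chrono::steady_clock;
        constexpr int kFrames = 500;

        const auto start = clock::now();
        for (int i = 0; i < kFrames; ++i)
            render_one_frame();
        const auto stop = clock::now();

        const double msPerFrame =
            std::chrono::duration<double, std::milli>(stop - start).count() / kFrames;

        // Reference value collected earlier on the same machine; in a real setup
        // it would live in a file next to the code and be updated deliberately.
        constexpr double kBaselineMs = 27.0;  // assumed baseline
        constexpr double kTolerance  = 0.05;  // allow 5% run-to-run noise

        std::cout << msPerFrame << " ms/frame (baseline " << kBaselineMs << ")\n";
        // Nonzero exit marks a regression, so a script or CI job can flag it.
        return msPerFrame > kBaselineMs * (1.0 + kTolerance) ? 1 : 0;
    }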

And here come some tricks that can be used to measure performance.

  • As others said in the comments, an average based on several runs may help you. It smooths out the noise from outside (see the first sketch after this list).
  • Process priority or processor affinity could help you control the environment. By giving low priority to other processes, your program gains more resources (the second sketch after this list shows one way to do this on Windows).
  • Measure the whole execution of a test and compare it against processor time. As several processes run at the same time, processor time may differ from execution time (see the third sketch after this list).
  • Update your reference values if you do a software update. Perhaps one update comes with a performance boost, while another comes with a security patch.
  • Give a performance range for your program instead of one specific number. Perhaps the temperature messed up one measurement and the clock speed was lowered.
  • If a test runs too fast to measure, execute the most critical part several times in a test case. What counts as too fast depends on how accurately you can measure. On a millisecond basis, it's really hard to decide whether a test that executed in 2 ms instead of 1 ms is a failure or not. However, executed 1000 times, 1033 ms compared to 1000 ms gives you much better insight (the first sketch after this list uses this trick as its inner loop).
  • Only time the critical section. Set up the environment and start the stopwatch only when everything is ready; the system startup could be another test.
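
First, a minimal sketch of the averaging and repetition tricks, assuming a hypothetical critical_part() as the operation under test; the iteration counts are placeholders. Each sample repeats the operation many times so the total is well above timer resolution, and several samples give you a mean, a range, and a standard deviation instead of one number:

    #include <algorithm>
    #include <chrono>
    #include <cmath>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // Hypothetical stand-in for the operation under test; replace with real code.
    void critical_part() {
        volatile double sink = 0.0;
        for (int i = 0; i < 10000; ++i) sink += i * 0.5;
    }

    int main() {
        using clock = std::chrono::steady_clock;
        constexpr int kSamples    = 10;    // outer runs, used for the statistics
        constexpr int kIterations = 1000;  // inner repetitions, to get above timer resolution

        std::vector<double> ms;
        for (int s = 0; s < kSamples; ++s) {
            const auto t0 = clock::now();
            for (int i = 0; i < kIterations; ++i)
                critical_part();
            const auto t1 = clock::now();
            ms.push_back(std::chrono::duration<double, std::milli>(t1 - t0).count());
        }

        const double mean = std::accumulate(ms.begin(), ms.end(), 0.0) / ms.size();
        double var = 0.0;
        for (double x : ms) var += (x - mean) * (x - mean);

        // Report a range, not a single number: mean, min, max, standard deviation.
        std::cout << "mean "    << mean << " ms"
                  << ", min "   << *std::min_element(ms.begin(), ms.end()) << " ms"
                  << ", max "   << *std::max_element(ms.begin(), ms.end()) << " ms"
                  << ", stddev " << std::sqrt(var / ms.size()) << " ms\n";
    }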
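
Second, for the priority/affinity point: since the question mentions Visual Studio, here is a sketch using the Windows API, where the timed workload would replace the final comment. Raising priority and pinning the thread to one core both reduce interference from the rest of the system:

    #include <windows.h>
    #include <iostream>

    int main() {
        // Raise this process above normal so background tasks interfere less.
        // (REALTIME_PRIORITY_CLASS exists too, but can starve the whole system.)
        if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
            std::cerr << "SetPriorityClass failed: " << GetLastError() << '\n';

        // Pin the measuring thread to core 0 so it is not migrated between cores.
        if (SetThreadAffinityMask(GetCurrentThread(), 1) == 0)
            std::cerr << "SetThreadAffinityMask failed: " << GetLastError() << '\n';

        // ... run the timed workload here ...
    }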
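
Third, for comparing execution time against processor time, again on Windows: GetProcessTimes reports the kernel and user CPU time actually charged to the process. Sampling it around a hypothetical run_benchmark() and comparing the difference against wall-clock time shows how much of the run went to other processes or to waiting:

    #include <windows.h>
    #include <chrono>
    #include <iostream>

    // Hypothetical stand-in for the workload being measured.
    void run_benchmark() {
        volatile double sink = 0.0;
        for (int i = 0; i < 50000000; ++i) sink += i * 0.5;
    }

    // Total CPU time (kernel + user) charged to this process so far, in ms.
    // FILETIME counts 100-nanosecond intervals.
    static double process_cpu_ms() {
        FILETIME creation, exitTime, kernel, user;
        GetProcessTimes(GetCurrentProcess(), &creation, &exitTime, &kernel, &user);
        ULARGE_INTEGER k, u;
        k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
        u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;
        return (k.QuadPart + u.QuadPart) / 10000.0;
    }

    int main() {
        const double cpu0 = process_cpu_ms();
        const auto  wall0 = std::chrono::steady_clock::now();

        run_benchmark();

        const auto  wall1 = std::chrono::steady_clock::now();
        const double cpu1 = process_cpu_ms();

        const double wall_ms =
            std::chrono::duration<double, std::milli>(wall1 - wall0).count();

        // Wall time much larger than CPU time means the process was waiting,
        // or other processes were scheduled in between.
        std::cout << "wall " << wall_ms << " ms, cpu " << (cpu1 - cpu0) << " ms\n";
    }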

Upvotes: 1
