Reputation: 1915
I am writing a 3D model display program using OpenGL, Qt, and C++, but I found something strange: the FPS (frames per second) of the Release build is lower than that of the Debug build. Here are their FPS readings:
The left is the Debug build and the right is the Release build:
The function I use to compute the FPS is:
#include <windows.h>   // GetTickCount()
#include <cstdio>      // sprintf_s()
#include <iostream>
using std::cout;
using std::endl;

void displayFPS()
{
    static float framesPerSecond = 0.0f; // frames counted in the current interval
    static float lastTime = 0.0f;        // time of the last FPS report, in seconds
    float currentTime = GetTickCount() * 0.001f;

    ++framesPerSecond;
    if (currentTime - lastTime > 1.0f)
    {
        framesPerSecond /= currentTime - lastTime;
        char strFrameRate[256];
        lastTime = currentTime;
        sprintf_s(strFrameRate, 256, "FPS : %f", framesPerSecond);
        cout << strFrameRate << endl;
        framesPerSecond = 0;
    }
}
I wonder how this could happen? Shouldn't the Release build be faster than the Debug build? Could someone tell me why?
Upvotes: 0
Views: 269
Reputation: 4962
According to this, the resolution of GetTickCount() is much coarser than a millisecond; it can even be as bad as 55 ms! Use a more reliable method to measure time intervals, like this one:
#include <windows.h>
#include <cstdint>

typedef std::int64_t int64;

// get duration of a single "clock" in microseconds
double get_clock_duration()
{
    LARGE_INTEGER f;
    QueryPerformanceFrequency(&f);
    return 1000000.0 / double(f.QuadPart);
}

// get the current performance-counter value (clocks since an arbitrary reference point)
int64 clocks()
{
    LARGE_INTEGER t;
    QueryPerformanceCounter(&t);
    return t.QuadPart;
}

// convert a duration in clocks to a duration in milliseconds
double milliseconds(int64 t)
{
    return get_clock_duration() * double(t) * 0.001;
}
Upvotes: 1