user7303074

Reputation:

Limiting FPS in C++

I'm currently making a game in which I would like to limit the frames per second but I'm having problems with that. Here's what I'm doing:

I'm getting the deltaTime through this method that is executed each frame:

void Time::calc_deltaTime() {
    double currentFrame = glfwGetTime();
    deltaTime = currentFrame - lastFrame;
    lastFrame = currentFrame;
}

deltaTime has the value I would expect (around 0.012... to 0.016...).

And then I'm using deltaTime to delay the frame through the Windows Sleep function like this:

void Time::limitToMAXFPS() {

    if(1.0 / MAXFPS > deltaTime)
        Sleep((1.0 / MAXFPS - deltaTime) * 1000.0);
}

MAXFPS is equal to 60 and I'm multiplying by 1000 to convert seconds to milliseconds. Though everything seems correct, I'm still getting more than 60 FPS (around 72 FPS).

I also tried this method using a while loop:

void Time::limitToMAXFPS() {

    double diff = 1.0 / MAXFPS - deltaTime;

    if(diff > 0) {
        // busy-wait until the rest of the frame budget has elapsed
        double t = glfwGetTime();
        while(glfwGetTime() - t < diff) { }
    }
}

But I'm still getting more than 60 FPS, around 72... Am I doing something wrong, or is there a better way to do this?

Upvotes: 3

Views: 14793

Answers (5)

Alawi

Reputation: 21

I've recently started using GLFW for a small side project I'm working on, and I've used std::chrono alongside std::this_thread::sleep_until to achieve 60 FPS:

auto start = std::chrono::steady_clock::now();
while(!glfwWindowShouldClose(window))
{
    ++frames;
    auto now = std::chrono::steady_clock::now();
    auto diff = now - start;
    auto end = now + std::chrono::milliseconds(16); // deadline for this frame (~60 FPS)
    if(diff >= std::chrono::seconds(1))
    {
        // once per second, print the frame count and reset it
        start = now;
        std::cout << "FPS: " << frames << std::endl;
        frames = 0;
    }
    glfwPollEvents();

    processTransition(countit);
    render.TickTok();
    render.RenderBackground();
    render.RenderCovers(countit);

    std::this_thread::sleep_until(end); // sleep away whatever is left of the 16 ms budget
    glfwSwapBuffers(window);
}

To add, you can easily adjust the FPS preference by adjusting end. With that said, I know GLFW was limiting me to 60 FPS anyway, but I had to disable that limit with glfwSwapInterval(0); just before the while loop.
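
For illustration only (this is a sketch, not Alawi's exact code): the same idea with the deadline derived from a target FPS instead of a hard-coded 16 ms. The name targetFPS and the placeholder loop body are assumptions.

#include <chrono>
#include <thread>

int main()
{
    const double targetFPS = 60.0;
    // duration<double> keeps the fractional part (~16.666 ms) instead of truncating to 16 ms.
    const std::chrono::duration<double> framePeriod(1.0 / targetFPS);

    for (int frame = 0; frame < 600; ++frame) // stand-in for the real render loop
    {
        auto frameStart = std::chrono::steady_clock::now();

        // ... glfwPollEvents(), update and render calls, glfwSwapBuffers() ...

        std::this_thread::sleep_until(frameStart + framePeriod); // wait out the rest of the frame
    }
}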

Upvotes: 2

Jacques Nel

Reputation: 621

How important is it that you return cycles back to the CPU? To me, it seems like a bad idea to use sleep at all. Someone please correct me if I am wrong, but I think sleep functions should be avoided.

Why not simply use an infinite loop that only runs the frame code once more than a certain time interval has passed? Try:

const double maxFPS = 60.0;
const double maxPeriod = 1.0 / maxFPS; // approx ~16.666 ms

bool running = true;
double lastTime = 0.0;

while( running ) {
    double time = glfwGetTime();
    double deltaTime = time - lastTime;

    if( deltaTime >= maxPeriod ) {
        lastTime = time;
        // code here gets called with max FPS
    }
}

The last time I used GLFW, it seemed to self-limit to 60 FPS anyway. If you are doing anything high-performance oriented (games or 3D graphics), avoid anything that sleeps, unless you want to use multithreading.

Upvotes: 5

user7303074

Reputation:

I've given up on trying to limit the FPS like this... As you said, Windows is very inconsistent with Sleep. My FPS average is always 64, not 60. The problem is that Sleep takes an integer (or long integer) argument, so I was casting it with static_cast, but I would need to pass it a double. Sleeping 16 milliseconds each frame is different from sleeping 16.6666... milliseconds. That's probably the cause of the extra 4 FPS (so I think).

I also tried :

std::this_thread::sleep_for(std::chrono::milliseconds(static_cast<long>((1.0 / MAXFPS - deltaTime) * 1000.0)));

and the same thing happens with sleep_for. Then I tried passing the decimal remainder of the milliseconds on to chrono::microseconds and chrono::nanoseconds, using the three of them together to get better precision, but guess what: I still get the freaking 64 FPS.
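
As a side note, std::chrono also has floating-point durations, which avoid splitting the remainder across three units. A minimal sketch, reusing the MAXFPS and deltaTime names from the question:

#include <chrono>
#include <ratio>
#include <thread>

void limitToMAXFPS(double deltaTime, double MAXFPS)
{
    const double remaining = 1.0 / MAXFPS - deltaTime; // seconds left in the current frame
    if (remaining > 0.0)
    {
        // duration<double, std::milli> carries fractional milliseconds (e.g. 3.7 ms) directly,
        // so nothing is lost to integer truncation. Accuracy is still bounded by the scheduler.
        std::this_thread::sleep_for(std::chrono::duration<double, std::milli>(remaining * 1000.0));
    }
}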

Another weird thing happens with the expression (1.0 / MAXFPS - deltaTime) * 1000.0: sometimes (yes, this is completely random) when I change 1000.0 to an integer constant, making the expression (1.0 / MAXFPS - deltaTime) * 1000, my FPS simply jumps to 74 for some reason, even though the two expressions should be exactly equivalent and nothing should change. Both evaluate as double expressions, so I don't think any type promotion is going wrong here.

So I decided to force V-sync through the function wglSwapIntervalEXT(1); in order to avoid screen tearing. And then I'm going to use the method of multiplying by deltaTime every value that might vary depending on the speed of the computer executing my game. It's going to be a pain because I might forget to multiply some value and not notice it on my own computer, creating inconsistency, but I see no other way... Thank you all for the help, though.
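
A minimal sketch of that delta-time scaling; the Player type and the position and speed members are made-up names, not from the actual game:

struct Player
{
    float position = 0.0f;
    float speed = 250.0f; // units per second, not units per frame

    void update(double deltaTime)
    {
        // Multiplying by deltaTime makes the per-frame step depend on elapsed time,
        // so movement speed stays the same regardless of the frame rate.
        position += static_cast<float>(speed * deltaTime);
    }
};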

Upvotes: 2

Sven Nilsson

Reputation: 1869

Sleep can be very inaccurate. A common phenomenon is that the actual time slept has a resolution of 14-15 milliseconds, which gives you a frame rate of ~70.

Is Sleep() inaccurate?
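
As a sketch of one commonly cited mitigation (not part of this answer): on Windows you can request a finer timer granularity with timeBeginPeriod/timeEndPeriod from winmm, which makes short sleeps less coarse. The exact effect is system-dependent.

#include <windows.h> // timeBeginPeriod / timeEndPeriod are declared via mmsystem.h; link against winmm.lib

void runWithFineTimerResolution()
{
    timeBeginPeriod(1); // ask for ~1 ms timer granularity instead of the default ~15 ms

    // ... game loop that relies on Sleep()/sleep_for() goes here ...

    timeEndPeriod(1);   // restore the previous granularity when done
}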

Upvotes: 3

doron

Reputation: 28872

Are you sure your Sleep function accepts floating-point values? If it only accepts int, your sleep will be a Sleep(0), which would explain your issue.
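
As an illustration of the conversion involved (not code from this answer): Sleep() takes a DWORD, so the fractional milliseconds of a double argument are dropped in the implicit conversion. The deltaTime below is just an example value.

#include <windows.h>
#include <iostream>

int main()
{
    const double MAXFPS = 60.0;
    const double deltaTime = 0.013;                                // example frame time in seconds
    const double requested = (1.0 / MAXFPS - deltaTime) * 1000.0;  // ~3.67 ms

    DWORD passed = static_cast<DWORD>(requested);                  // what Sleep() actually receives: 3
    std::cout << "requested " << requested << " ms, Sleep gets " << passed << " ms\n";
    Sleep(passed);
    return 0;
}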

Upvotes: -1
