Reputation: 4449
I verified this with two OpenGL applications that use a QGLWidget. If screen updates are very frequent, say 30 fps, and/or the resolution is high, the CPU usage of one core skyrockets. I'm looking for a way to fix this, and/or to verify whether it happens on Windows as well.
Upvotes: 0
Views: 1844
Reputation: 13877
I have seen a few GL implementations that were terrible at minimising host CPU usage. There appear to be plenty of situations where the CPU will busy-wait while the GPU draws, so simply turning on vertical sync in the card settings often makes your app draw less often yet take up just as much CPU: the driver spins in the buffer swap until the next vertical blank. Unfortunately there is little you can do about this yourself, save for limiting how often your app draws.
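For a QGLWidget, the usual way to do that is to drive repaints from a QTimer instead of repainting continuously. A minimal sketch, assuming Qt 4's QGLWidget API (the class name and interval are illustrative):

    #include <QGLWidget>
    #include <QTimer>

    // Repaint on a fixed timer instead of as fast as possible.
    class CappedGLWidget : public QGLWidget
    {
        Q_OBJECT
    public:
        explicit CappedGLWidget(QWidget *parent = 0) : QGLWidget(parent)
        {
            QTimer *timer = new QTimer(this);
            // updateGL() is a slot on QGLWidget that schedules a repaint.
            connect(timer, SIGNAL(timeout()), this, SLOT(updateGL()));
            timer->start(33); // ~30 fps; raise the interval to trade smoothness for CPU
        }

    protected:
        void paintGL()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the scene here ...
        }
    };

Note this only helps with CPU time spent in your own draw path; it won't stop a driver from spinning inside the swap itself.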
Upvotes: 0
Reputation: 24892
In my experience, QGLWidget itself is an efficient, thin wrapper around GL and your windowing system; if you see high CPU usage with it, chances are you'd see high CPU usage with any other way of implementing an OpenGL app too.
If you have high CPU usage using OpenGL, chances are either:

- you aren't actually getting hardware acceleration, and the GL implementation has fallen back to rasterising on the CPU, or
- your own per-frame code (scene setup, data uploads, etc.) is what's eating the CPU.

The fact you mention display resolution as a factor rather suggests the former problem, since software rasterisation cost scales with the number of pixels drawn.
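A quick way to check for a software fallback is to print the renderer string from inside a current GL context, e.g. from initializeGL(). A sketch (the helper name is made up for illustration):

    #include <QGLWidget> // pulls in the GL headers
    #include <cstdio>

    // Report which renderer the context actually uses.
    // Must be called while a GL context is current.
    void printRendererInfo()
    {
        const char *vendor   = reinterpret_cast<const char *>(glGetString(GL_VENDOR));
        const char *renderer = reinterpret_cast<const char *>(glGetString(GL_RENDERER));
        std::printf("GL_VENDOR:   %s\nGL_RENDERER: %s\n", vendor, renderer);
        // Strings like "Mesa ... softpipe/llvmpipe", "GDI Generic" (Windows) or
        // "Apple Software Renderer" mean rasterisation is happening on the CPU.
    }

If it does turn out to be a software renderer, the fix is usually at the driver/installation level rather than in your code.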
Upvotes: 6
Reputation: 26409
You need to grab a profiler, profile your code and see where the bottleneck is. Since your program eats CPU resources (and not GPU ones), this should be fairly easy. As far as I know, "AQTime 7 Standard" (Windows) is currently available for free, or you could use gprof, depending on your toolkit/platform.
One very likely scenario (aside from a software OpenGL fallback) is that you use dynamic memory allocation too frequently, or that you are running a debug build. Immediate mode can also be a problem if you push 100000+ polygons per frame; see the sketch below.
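To illustrate the immediate-mode point: batching vertices into a vertex array replaces one GL call per vertex with a single draw call. A sketch (the vertex layout and function name are illustrative):

    #include <QGLWidget> // pulls in the GL headers
    #include <vector>

    struct Vertex { GLfloat x, y, z; };

    // Draw a triangle soup with one call instead of
    // glBegin()/glVertex3f()/glEnd(), which costs a call per vertex.
    void drawTriangles(const std::vector<Vertex> &verts)
    {
        if (verts.empty())
            return;
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &verts[0].x);
        glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(verts.size()));
        glDisableClientState(GL_VERTEX_ARRAY);
    }

Vertex buffer objects go one step further by keeping the data on the GPU between frames, so the CPU doesn't retransmit it every draw.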
Upvotes: 1