matthias_buehlmann

Reputation: 5061

Does wglGetCurrentContext sync the GPU and CPU?

When programming with OpenGL, glGet functions should be avoided because they force the GPU and CPU to synchronize. Does this also apply to the wgl function "wglGetCurrentContext", which obtains a handle to the current OpenGL context? If not, are there any other performance problems around wglGetCurrentContext?

Upvotes: 2

Views: 659

Answers (2)

Reto Koradi

Reputation: 54592

There is some misconception implied in your question:

glGet functions should be avoided because they force the GPU and CPU to synchronize

The first part is true. The second part is not. Most glGet*() calls do not force a GPU/CPU synchronization. They only read state stored in the driver code, and do not involve the GPU at all.
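For instance, a call like this only reads state cached in the driver, with no GPU round trip involved (a minimal sketch):

    // Reads state kept in the driver; no GPU round trip is involved.
    GLint viewport[4] = {0};
    glGetIntegerv(GL_VIEWPORT, viewport);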

There are some exceptions, which include the glGet*() calls that actually get data produced by the GPU. Typical examples include:

  • glGetBufferSubData(): Has to block if data is produced by the GPU (e.g. using transform feedback).
  • glGetTexImage(): Blocks if texture data is produced by the GPU (e.g. if the texture is used as a render target).
  • glGetQueryObjectiv(..., GL_QUERY_RESULT, ...): Blocks if the query has not finished (see the polling sketch after this list).
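For the query case in particular, the stall can be avoided by polling GL_QUERY_RESULT_AVAILABLE before asking for the result. A minimal sketch, assuming query was issued earlier with glBeginQuery()/glEndQuery():

    // Only fetch the result once it is known to be ready, so the
    // glGetQueryObjectuiv(..., GL_QUERY_RESULT, ...) call cannot stall.
    GLuint available = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT_AVAILABLE, &available);
    if (available) {
        GLuint result = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &result);
        // ... use result ...
    } else {
        // Not ready yet: do other work and poll again later instead
        // of forcing a CPU/GPU synchronization.
    }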

Now, it's still true that you should avoid glGet*() calls where possible. Mainly for two reasons:

  • Most of them are unnecessary, since they query state that you set yourself, so you should already know what it is (a state-shadowing sketch follows this list). And any unnecessary call is a waste.
  • They may cause synchronization between threads in multi-threaded driver implementations. So they may result in synchronization, but not with the GPU.
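To illustrate the first point, applications typically shadow state on their own side instead of reading it back. A minimal sketch, with the cache structure made up for illustration:

    // Shadow the binding in application code so it never has to be
    // queried back with glGetIntegerv(GL_TEXTURE_BINDING_2D, ...).
    struct GLStateCache {
        GLuint boundTexture2D = 0;

        void bindTexture2D(GLuint tex) {
            if (tex != boundTexture2D) {   // also skips redundant binds
                glBindTexture(GL_TEXTURE_2D, tex);
                boundTexture2D = tex;
            }
        }
    };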

There are of course some good uses for glGet*() calls, for example:

  • To get implementation limits, like maximum texture sizes, etc. Call once during startup.
  • Calls like glGetUniformLocation() and glGetAttribLocation(). Make sure you only call them once after shader linkage (see the sketch after this list). Even these can be avoided by using layout qualifiers in the shader code, at least in recent GLSL versions.
  • glGetError(), during debugging only.
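A minimal sketch of the caching pattern for the second point (the program and the "u_mvp" uniform name are illustrative; shader creation is assumed to happen elsewhere):

    // Query the location exactly once after glLinkProgram(), then
    // reuse the cached value every frame.
    GLint mvpLoc = -1;

    void cacheLocations(GLuint program) {
        mvpLoc = glGetUniformLocation(program, "u_mvp");
    }

    // In recent GLSL the lookup can be skipped entirely:
    //   layout(location = 0) in vec4 a_position;   // attribute location
    //   layout(location = 0) uniform mat4 u_mvp;   // GL 4.3+ / ARB_explicit_uniform_location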

As for wglGetCurrentContext(), I doubt that it would be very expensive. The current context is typically stored in some kind of thread local storage, which can be accessed very efficiently.

Still, I don't see why calling it would be necessary. If you need the context again later, you can store it away when you create it. And if that's not possible for some reason, you can call wglGetCurrentContext() once, and store the result. There definitely shouldn't be a need to call it repeatedly.
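A minimal sketch of both options (hdc is assumed to be a valid device context; error handling omitted):

    // Option 1: keep the handle from creation instead of querying it later.
    HGLRC g_context = NULL;

    void createContext(HDC hdc) {
        g_context = wglCreateContext(hdc);
        wglMakeCurrent(hdc, g_context);
    }

    // Option 2: if context creation is out of your control, query
    // once and cache the result.
    HGLRC getContextOnce() {
        static HGLRC cached = wglGetCurrentContext();
        return cached;   // note: per-process cache, not per-thread
    }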

Upvotes: 4

Aiden Koss

Reputation: 111

The performance characteristics of the WGL functions vary widely depending on the vendor and driver.

I don't expect wglGetCurrentContext to be an especially expensive call (unless you make it a huge number of times), since the WGL functions are generally divorced from the GL context's state vector.

That being said, SETTING the current context will cause all manner of syncing between contexts, often in deeply undocumented ways. I've dealt with a couple of AMD and Intel driver bugs where some things that were supposed to be synchronized via other means could ONLY be synchronized with a redundant MakeCurrent call every frame.
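For concreteness, a sketch of that workaround (hdc/hglrc are assumed to be the device context and GL context that are already current on the rendering thread):

    // Redundant MakeCurrent each frame, used purely as a driver-bug
    // workaround to force synchronization that should happen anyway.
    void renderFrame(HDC hdc, HGLRC hglrc) {
        wglMakeCurrent(hdc, hglrc);
        // ... draw calls ...
        SwapBuffers(hdc);
    }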

Upvotes: 2
