daniel

Reputation: 26

How to use the WGL ARB extensions in LWJGL 3?

Previously, my game engine (written in Java) used LWJGL 2.9.3. I ran into an issue: I wanted LWJGL to run on a different graphics card, and after some research I found that this is impossible with LWJGL 2.9.3. I then looked into whether LWJGL's developers were going to add an option for choosing which graphics card you run on, and found that LWJGL 3 does support this, so I switched over to LWJGL 3.
I got everything working and started on the reason I switched, but I soon found that there is no documentation for changing which graphics card you use. After hours of testing I worked out that you need to create a context through the WGL ARB extensions.
I know you can use the method wglCreateContextAttribsARB(long hdc, long sharedContext, Byte/IntBuffer attribList), but it is unclear how to call it. I am not sure whether the first argument should be the Windows DC pointer or something else. I know you do not have to pass anything for 'long sharedContext', so I put 'MemoryUtil.NULL'. So here are my questions:

  1. How do you create a context with wglCreateContextAttribsARB? More specifically, what do you pass in: do you pass an empty Int/ByteBuffer, or do you fill the buffer with data before passing it as a parameter? I do not know.

  2. From there, how do you use WGLNVGPUAffinity.getInstance().wglCreateAffinityDCNV(gpuList)? The GPU list is either a PointerBuffer or a ByteBuffer; how do I obtain that data to pass in in the first place?

Sorry about the long post, but I am very frustrated by the lack of documentation. Thank you in advance!

Upvotes: 1

Views: 182

Answers (1)

Nicol Bolas

Reputation: 473407

LWJGL 3 (through GLFW) handles OpenGL context creation. Which means you don't get to handle it. GLFW will create the context, though it does give you some options as to exactly how. But GLFW is cross-platform, so it doesn't give you Windows & NVIDIA-specific options.
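
For illustration, the options GLFW does give you are window hints set before window creation; none of them selects a GPU. A minimal sketch using LWJGL 3's GLFW bindings (the hint values here are just an example):

    import org.lwjgl.glfw.GLFW;
    import org.lwjgl.system.MemoryUtil;

    public class GlfwContextHints {
        public static void main(String[] args) {
            GLFW.glfwInit();

            // Cross-platform context options GLFW exposes: GL version, profile, etc.
            // There is no hint for choosing which GPU the context runs on.
            GLFW.glfwWindowHint(GLFW.GLFW_CONTEXT_VERSION_MAJOR, 3);
            GLFW.glfwWindowHint(GLFW.GLFW_CONTEXT_VERSION_MINOR, 3);
            GLFW.glfwWindowHint(GLFW.GLFW_OPENGL_PROFILE, GLFW.GLFW_OPENGL_CORE_PROFILE);

            long window = GLFW.glfwCreateWindow(800, 600, "demo", MemoryUtil.NULL, MemoryUtil.NULL);
            GLFW.glfwMakeContextCurrent(window);
        }
    }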

However, that doesn't mean you don't get some control. LWJGL 3 also gives you access to the platform-specific APIs (GLX, WGL, etc), as well as extensions to them. So you can use WGL.wglGetCurrentContext to get the context, then use the WGLNVGPUAffinity functions to create affinity contexts. That extension provides all of the various tools for enumerating GPUs and creating new OpenGL contexts that target a specific GPU.
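
For example, assuming you already have a GPU handle from wglEnumGpusNV (see the enumeration sketch after the next paragraph), creating a context locked to that GPU could look roughly like this. This is an untested sketch: the static-method style matches current LWJGL 3 builds (older builds used the getInstance() pattern), and the attribute values are only an example.

    import org.lwjgl.BufferUtils;
    import org.lwjgl.PointerBuffer;
    import org.lwjgl.opengl.GL;
    import org.lwjgl.opengl.WGL;
    import org.lwjgl.opengl.WGLARBCreateContext;
    import org.lwjgl.opengl.WGLNVGPUAffinity;
    import org.lwjgl.system.MemoryUtil;

    import java.nio.IntBuffer;

    public class AffinityContextSketch {

        // gpuHandle: a GPU handle previously returned by wglEnumGpusNV
        static long createContextOnGpu(long gpuHandle) {
            // The GPU handle list passed to wglCreateAffinityDCNV must be NULL-terminated.
            PointerBuffer gpuList = BufferUtils.createPointerBuffer(2);
            gpuList.put(0, gpuHandle);
            gpuList.put(1, MemoryUtil.NULL);

            // Create a device context restricted to the chosen GPU.
            long affinityDC = WGLNVGPUAffinity.wglCreateAffinityDCNV(gpuList);
            if (affinityDC == MemoryUtil.NULL) {
                throw new IllegalStateException("wglCreateAffinityDCNV failed");
            }

            // The attribList is key/value pairs terminated by 0; it is not left empty.
            IntBuffer attribs = BufferUtils.createIntBuffer(5);
            attribs.put(WGLARBCreateContext.WGL_CONTEXT_MAJOR_VERSION_ARB).put(3);
            attribs.put(WGLARBCreateContext.WGL_CONTEXT_MINOR_VERSION_ARB).put(3);
            attribs.put(0); // terminator
            attribs.flip();

            // Optionally share with the context GLFW created; pass MemoryUtil.NULL for no sharing.
            long shareWith = WGL.wglGetCurrentContext();

            long context = WGLARBCreateContext.wglCreateContextAttribsARB(affinityDC, shareWith, attribs);
            if (context == MemoryUtil.NULL) {
                throw new IllegalStateException("wglCreateContextAttribsARB failed");
            }

            // Make the affinity context current on the affinity DC and load its capabilities.
            WGL.wglMakeCurrent(affinityDC, context);
            GL.createCapabilities();
            return context;
        }
    }

Keep in mind that an affinity DC has no default framebuffer, so rendering on that context goes to framebuffer objects.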

The extension specification is even nice enough to provide sample code for enumerating GPUs. Of course, it's NVIDIA only, but that clearly isn't a problem for you.
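
Translated to LWJGL 3, the spec's enumeration loop would look something like this (again an untested sketch; it assumes your LWJGL build exposes the wglEnumGpusNV(int, PointerBuffer) overload):

    import org.lwjgl.BufferUtils;
    import org.lwjgl.PointerBuffer;
    import org.lwjgl.opengl.WGLNVGPUAffinity;

    import java.util.ArrayList;
    import java.util.List;

    public class GpuEnumerationSketch {

        // Collects the handles of all GPUs the NVIDIA driver exposes, in enumeration order.
        static List<Long> enumerateGpus() {
            List<Long> gpus = new ArrayList<>();
            PointerBuffer handle = BufferUtils.createPointerBuffer(1);

            // wglEnumGpusNV returns false once the index runs past the last GPU.
            for (int index = 0; WGLNVGPUAffinity.wglEnumGpusNV(index, handle); index++) {
                gpus.add(handle.get(0));
            }
            return gpus;
        }
    }

Each returned handle can then be placed in the NULL-terminated list that wglCreateAffinityDCNV expects, as in the previous sketch.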

Upvotes: 2
