larsmoa

Reputation: 12932

Tiled rendering of viewports bigger than GL_MAX_VIEWPORT_DIMS

I'm creating a class that takes an OpenGL scenegraph and renders it using a QGLFramebufferObject. To support (virtually) unlimited sizes I'm using tiling: I render many small images that can be combined into one big image after all tiles have been rendered.

I do the tiling by setting up a viewport (glViewport) covering the entire image and then using glScissor to "cut out" one tile at a time. This works fine for resolutions up to GL_MAX_VIEWPORT_DIMS, but tiles beyond that limit come out empty.
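Roughly, the per-tile setup looks like this (a minimal sketch; the function and variable names are illustrative, not my exact code):

    #include <QGLFramebufferObject>   // pulls in the GL headers via Qt

    // The viewport spans the whole virtual image, offset so the wanted tile
    // lands at the FBO's origin; the scissor box keeps drawing inside the tile.
    void setupTile(int tileX, int tileY, int tileW, int tileH,
                   int totalW, int totalH)
    {
        glViewport(-tileX * tileW, -tileY * tileH, totalW, totalH);
        glEnable(GL_SCISSOR_TEST);
        glScissor(0, 0, tileW, tileH);
        // ... render the scenegraph into the bound QGLFramebufferObject ...
        glDisable(GL_SCISSOR_TEST);
        // Problem: totalW/totalH are clamped to GL_MAX_VIEWPORT_DIMS, so tiles
        // beyond that limit come out empty.
    }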

How should I approach this problem? Do I need to alter the camera, or are there any neat tricks for doing this? I'm using Coin/OpenInventor, so any tips specific to those frameworks are very welcome too.

Upvotes: 3

Views: 1151

Answers (3)

genpfault

Reputation: 52084

Give the OpenGL Tile Rendering Library a try.
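The usage pattern is roughly this (sketched from memory of the TR API in tr.h; drawScene() stands in for your own scene rendering, so double-check the calls against the header):

    #include "tr.h"   // Brian Paul's Tile Rendering library

    void renderBigImage(int imageW, int imageH, unsigned char *pixels)
    {
        TRcontext *tr = trNewContext();
        trTileSize(tr, 256, 256, 0);                      // tile size, no border
        trImageSize(tr, imageW, imageH);                  // final image size
        trImageBuffer(tr, GL_RGB, GL_UNSIGNED_BYTE, pixels);
        trPerspective(tr, 45.0, (double)imageW / imageH, 1.0, 100.0);

        int more;
        do {
            trBeginTile(tr);     // sets up viewport and per-tile projection
            drawScene();         // normal scene rendering
            more = trEndTile(tr);
        } while (more);

        trDelete(tr);
    }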

Upvotes: 1

Hannesh

Reputation: 7488

Changing the camera isn't as hard as you might think, and it's the only solution I can see, apart from modifying vertex shaders.

By scaling and translating the projection matrix along the x and y axes, you can easily get any subregion of the normal camera's view.

For a given min and max of the desired subregion, where the full view runs from (-1, -1) to (1, 1), translate by (max + min) / 2 and scale by (max - min) / 2.
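As a sketch, for a glFrustum-style projection this amounts to shrinking and shifting the frustum window to the tile's subregion (the function name and parameters here are illustrative):

    #include <GL/gl.h>

    // Projection for the tile covering the subregion (minX, minY)-(maxX, maxY)
    // of the full view, where the full view is (-1, -1) to (1, 1).
    void setTileProjection(double left, double right, double bottom, double top,
                           double zNear, double zFar,
                           double minX, double maxX, double minY, double maxY)
    {
        const double cx = 0.5 * (left + right), hx = 0.5 * (right - left);
        const double cy = 0.5 * (bottom + top), hy = 0.5 * (top - bottom);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        // The window centre moves by (max + min) / 2 of the half-extent and the
        // half-extent scales by (max - min) / 2, as described above.
        glFrustum(cx + minX * hx, cx + maxX * hx,
                  cy + minY * hy, cy + maxY * hy,
                  zNear, zFar);
        glMatrixMode(GL_MODELVIEW);
    }

Render each tile with its own sub-projection at a normal, tile-sized viewport, then stitch the tiles together afterwards.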

Upvotes: 2

Sean

Reputation: 254

You could try scaling the entire world down, which indirectly lets the maximum viewport size cover more detail. Put another way, you could scale both the image and the viewport down and get the same visual effect.

Upvotes: 0
