user3312130

Reputation: 178

LibGDX - How do I render two textures with different scales in a single shader?

I'm trying to scale a texture produced in an FBO differently from another texture, then sample both in the same fragment shader so I can blend them together.

I followed this tutorial to produce a lightmap for my game, and the lightmap works perfectly. However, I want to transform the lightmap in the shader so that parallaxed objects in the background receive a proportionally scaled lightmap: their "3D" distance to a light source shouldn't appear to affect their lighting, but their 2D distance should.

My best guess as to how to do this would be to scale the lightmap in place based on a parallaxed object's Z distance. Z distance in my game is directly proportional to an object's 2D distance to the camera.

Here are examples depicting what happens with the regular lightmap and what the desired effect would be. The lightmap here consists of a yellow radial gradient that is positioned at the yellow non-parallaxed "sun" off to the right:

Description of lightmap scaling

Animated example of current lighting

Animated example of intended lighting

Normally in my game, I scale parallax objects by zooming the camera attached to the SpriteBatch through something like this in LibGDX:

Array<Something> objects = new Array<Something>();
SpriteBatch batch = new SpriteBatch();
OrthographicCamera camera = new OrthographicCamera(screenWidth, screenHeight);

//{start loop, add objects, run logic, etc.}
...

for (int i = 0; i < objects.size; i++) {
    camera.zoom = objects.get(i).z;
    camera.update();
    batch.setProjectionMatrix(camera.combined);
    objects.get(i).sprite.draw(batch);
}

From what I've researched, there should be a way to do these kinds of transformations in the vertex shader. Doing such a transformation is straightforward through the default LibGDX vertex shader, which is set up to multiply the vertex position with the camera's transformation matrix like so:

void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;

    gl_Position = u_projTrans * a_position;
}

But I'm not sure how I could scale only one of the two textures used in the fragment shader when the transformation is tied to the shared vertex position. For reference, here is the fragment shader as implemented from the tutorial:

varying vec4 v_color;
varying vec2 v_texCoords;

uniform sampler2D u_texture;
uniform sampler2D u_lightmap;
uniform vec4 ambientColor;
uniform vec2 resolution;

void main() {
    vec4 diffuseColor = texture2D(u_texture, v_texCoords);

    vec2 lightCoord = (gl_FragCoord.xy / resolution.xy);
    vec4 light = texture2D(u_lightmap, lightCoord);

    vec3 ambient = ambientColor.rgb * ambientColor.a;
    vec3 intensity = ambient + light.rgb;
    vec3 finalColor = diffuseColor.rgb * intensity;

    gl_FragColor = v_color * vec4(finalColor, diffuseColor.a);
}

I think the solution involves transforming the texture coordinates instead of the position, and passing those transformed coordinates to the lightmap lookup instead of deriving them from the screen resolution. But my attempts at doing so end up a big mess, because I'm not very knowledgeable about or experienced with OpenGL ES 2 or GLSL.

Is there a good method for scaling one texture in a two texture fragment shader differently than the other one? Or is there a better way to achieve what I'm trying to do with the lightmap?

Edit:

So I can accomplish what I'm trying to do by recreating the FBO lightmap at a different scale for each parallaxed object, but this creates huge performance problems for one of my intended platforms (Android).

I found some links that show that what I'm trying to do is possible, but they both describe passing in two different texture coordinates to the vertex shader, which doesn't seem to work with SpriteBatch. Is it possible to transform texture coordinates in the vertex shader so that I can pass transformed coordinates for the lightmap to the fragment shader? Or am I misunderstanding something about how texture coordinates work?

Upvotes: 3

Views: 1543

Answers (1)

Tenfour04

Reputation: 93779

First of all, just in case I'm right in assuming that Something represents a single sprite: every time you call batch.setProjectionMatrix(), the sprite batch is flushed, resulting in another draw call, which adds up quickly once you have more than a few dozen sprites. If you keep sending your array of objects to the sprite batch this way, you should sort them by z position and only call batch.setProjectionMatrix() when the z has changed since the last call of batch.draw(). I personally would keep a separate array of objects for each parallax layer, to keep it simple and avoid sorting.
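A framework-free sketch of that grouping idea (the StubBatch here is a hypothetical stand-in that only counts projection-matrix switches; in the real game it would be LibGDX's SpriteBatch):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class LayerBatchingSketch {
    // Hypothetical stub standing in for com.badlogic.gdx.graphics.g2d.SpriteBatch.
    static class StubBatch {
        int projectionSwitches = 0;
        void setProjectionMatrix(float zoom) { projectionSwitches++; }
        void draw(float z) { /* real code: object.sprite.draw(batch) */ }
    }

    // Sort by z, then only switch the projection matrix when z actually changes.
    public static int drawSorted(StubBatch batch, List<Float> zValues) {
        zValues.sort(Comparator.naturalOrder());
        float lastZ = Float.NaN;
        for (float z : zValues) {
            if (z != lastZ) { // one flush per distinct layer, not per sprite
                batch.setProjectionMatrix(z);
                lastZ = z;
            }
            batch.draw(z);
        }
        return batch.projectionSwitches;
    }

    public static void main(String[] args) {
        List<Float> zs = new ArrayList<>(List.of(1f, 0.5f, 1f, 0.5f, 0.25f, 1f));
        int switches = drawSorted(new StubBatch(), zs);
        // 6 sprites but only 3 distinct z layers -> 3 flushes instead of 6
        System.out.println(switches); // prints 3
    }
}
```

Unsorted, the same six sprites would trigger a flush on nearly every draw; sorted, the flush count is bounded by the number of layers.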

Proceeding from there: first, when drawing your lightmap, render it for the smallest zoom your game will support, so you never have to scale the lightmap down and risk showing its clipped edges. So after fbo.begin(), make sure cam.zoom is set to the minimum cam.zoom for the scene.
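As a minimal illustration of picking that zoom (assuming, as in the loop below, that each parallax layer stores its own zoom value), the lightmap pass would use the smallest one:

```java
import java.util.List;

public class MinZoom {
    // Smallest zoom across all parallax layers; the FBO lightmap pass
    // would set cam.zoom to this value before drawing the lights.
    public static float minZoom(List<Float> layerZooms) {
        float min = Float.POSITIVE_INFINITY;
        for (float z : layerZooms) {
            if (z < min) min = z;
        }
        return min;
    }

    public static void main(String[] args) {
        System.out.println(minZoom(List.of(1.0f, 0.5f, 0.25f))); // prints 0.25
    }
}
```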

To get a scaled lightmap coordinate, you need to calculate it in your vertex shader and pass it down to the fragment shader. First of all, you need the zoom level for your input, so you need to pass that in as a uniform. You can do that like this:

//including the changes explained above, where objectLayers is an
//Array<LayerWrapper>, and LayerWrapper contains a zoom value and
//an Array<Something>
for (int i = 0; i < objectLayers.size; i++) {
    LayerWrapper layer = objectLayers.get(i);
    Array<Something> layerObjects = layer.objects;

    camera.zoom = layer.zoom;
    camera.update();
    batch.setProjectionMatrix(camera.combined);

    //update the zoom level in the shader you're using with the sprite batch
    customShader.bind();
    customShader.setUniformf("u_zoom", layer.zoom);

    for (Something object : layerObjects) {
        object.sprite.draw(batch);
    }
}

Then update your vertex shader to declare uniform float u_zoom; and varying vec2 v_lightCoords;. The light coordinates will be scaled based on zoom. And since we're already passing a varying, we should also pre-calculate the exact texture coordinates in the vertex shader, so the fragment shader doesn't have to, which avoids a dependent texture read (an optimization that could have been made in the tutorial to begin with).

The vertex shader now looks like this:

void main()
{
    v_color = a_color;
    v_texCoords = a_texCoord0;
    vec4 screenSpacePosition = u_projTrans * a_position;
    gl_Position = screenSpacePosition;

    //map clip-space xy from [-1, 1] to [0, 1], scaled by the layer zoom
    v_lightCoords = (screenSpacePosition.xy * u_zoom) * 0.5 + 0.5;
}
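The `* 0.5 + 0.5` step maps clip-space coordinates (which run from -1 to 1 across the screen) into texture coordinates (0 to 1). A plain-Java sketch of the same math, with `zoom` playing the role of `u_zoom`:

```java
public class LightCoordSketch {
    // Same math as the vertex shader line:
    // v_lightCoords = (screenSpacePosition.xy * u_zoom) * 0.5 + 0.5;
    public static float toLightCoord(float clipSpace, float zoom) {
        return clipSpace * zoom * 0.5f + 0.5f;
    }

    public static void main(String[] args) {
        // With zoom = 1, clip space [-1, 1] maps straight onto texture space [0, 1].
        System.out.println(toLightCoord(-1f, 1f)); // 0.0
        System.out.println(toLightCoord( 0f, 1f)); // 0.5
        System.out.println(toLightCoord( 1f, 1f)); // 1.0
        // With zoom = 2, the same vertex samples twice as far from the
        // lightmap's center (0.5, 0.5).
        System.out.println(toLightCoord(0.25f, 2f)); // 0.75
    }
}
```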

Since v_lightCoords already has the texture coordinates built in, you can simplify your fragment shader's lookup to vec4 light = texture2D(u_lightmap, v_lightCoords); and remove uniform vec2 resolution;, which is no longer needed.
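Put together, the simplified fragment shader might look like this (a sketch based on the tutorial's shader, with the new varying replacing the resolution-based lookup):

```
varying vec4 v_color;
varying vec2 v_texCoords;
varying vec2 v_lightCoords;

uniform sampler2D u_texture;
uniform sampler2D u_lightmap;
uniform vec4 ambientColor;

void main() {
    vec4 diffuseColor = texture2D(u_texture, v_texCoords);

    //sample the lightmap with the pre-scaled coordinates from the vertex shader
    vec4 light = texture2D(u_lightmap, v_lightCoords);

    vec3 ambient = ambientColor.rgb * ambientColor.a;
    vec3 intensity = ambient + light.rgb;
    vec3 finalColor = diffuseColor.rgb * intensity;

    gl_FragColor = v_color * vec4(finalColor, diffuseColor.a);
}
```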

Upvotes: 2
