RectangleEquals

Reputation: 1825

Composing a tile's texture coordinates using GLSL

Preface

Consider the following example image:

Tilesheet Example


Note the following:

- The overall image is 256×128 pixels.
- Each tile is 32×32 pixels.
- There are 28 tiles in total, indexed left to right, top to bottom.

Question

Given only the above info and a valid tile index, how could we use a vertex shader to construct the proper texture coordinates for the desired tile, which would then be passed to the fragment shader for sampling? Luckily, we can make use of gl_VertexID to identify which 'corner' of the quad we are working with. But since texture coordinates are normalized texels (rather than pixels), we also need some way of scaling everything down to the range 0 to 1, which further complicates the algorithm.

Here's what I have so far, though it only seems to display a solid color from the image:

#version 330 core

layout(location=0) in vec3 in_pos;
out vec2 out_texCoord;

void main()
{
    // These will eventually be uniforms/attributes, and are
    //  only used here for more immediate debugging purposes
    int tileNum = 1;        // Desired tile index
    int tileCount = 28;     // Maximum # of tiles
    float tileWidth = 32;   // Width of each tile in pixels
    float tileHeight = 32;  // Height of each tile in pixels
    float imgWidth = 256;   // Overall width of image in pixels
    float imgHeight = 128;  // Overall height of image in pixels

    // Attempt to calculate the correct texture coordinates
    //  for the desired tile, given only the above info...
    int tileIndex = tileNum % tileCount;
    int columnCount = int(imgWidth / tileWidth);
    int rowCount = int(imgHeight / tileHeight);
    int tileX = tileIndex % columnCount;
    int tileY = int(float(tileIndex) / float(columnCount));

    float startX = float(tileX) * tileWidth;
    float startY = float(tileY) * tileHeight;
    float endX = 1.0f / (startX + tileWidth);
    float endY = 1.0f / (startY + tileHeight);
    startX = 1.0f / startX;
    startY = 1.0f / startY;

    // Check which corner of the image we are working with
    int vid = gl_VertexID % 4;
    switch(vid) {
        case 0:
            out_texCoord = vec2(startX, endY);
            break;
        case 1:
            out_texCoord = vec2(endX, endY);
            break;
        case 2:
            out_texCoord = vec2(startX, startY);
            break;
        case 3:
            out_texCoord = vec2(endX, startY);
            break;
    }
    gl_Position = vec4(in_pos, 1);
}



Disclaimer

Before anyone comes barging in about a lack of info/code, please note that the following hard-coded texture coordinates do properly display the entire image, as expected. Effectively, this means something is wrong with the texture coordinate calculations themselves, and not with my application's OpenGL setup:

int vid = gl_VertexID % 4;
switch(vid) {
    case 0:
        out_texCoord = vec2(0, 1);
        break;
    case 1:
        out_texCoord = vec2(1, 1);
        break;
    case 2:
        out_texCoord = vec2(0, 0);
        break;
    case 3:
        out_texCoord = vec2(1, 0);
        break;
}

Upvotes: 1

Views: 767

Answers (1)

Nico Schertler

Reputation: 32597

Inverting a texture coordinate (1.0 / x) is a shot in the dark. You just need to scale the pixel coordinates down by the image size:

float endX = (startX + tileWidth) / imgWidth;
float endY = (startY + tileHeight) / imgHeight;
startX = startX / imgWidth;
startY = startY / imgHeight;
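
For reference, this is roughly how the calculation from the question looks with that fix applied (a sketch only; the hard-coded debug constants are kept from the question and would normally be uniforms):

// Which tile, and how the sheet is laid out (debug constants from the question)
int tileNum = 1;
int tileCount = 28;
float tileWidth = 32.0;
float tileHeight = 32.0;
float imgWidth = 256.0;
float imgHeight = 128.0;

// Locate the tile within the grid
int tileIndex = tileNum % tileCount;
int columnCount = int(imgWidth / tileWidth);
int tileX = tileIndex % columnCount;
int tileY = tileIndex / columnCount;   // integer division

// Tile corners in pixels...
float startX = float(tileX) * tileWidth;
float startY = float(tileY) * tileHeight;

// ...normalized to the [0, 1] range by dividing by the image dimensions
float endX = (startX + tileWidth) / imgWidth;
float endY = (startY + tileHeight) / imgHeight;
startX /= imgWidth;
startY /= imgHeight;

Dividing a pixel position by the image size maps it into the normalized [0, 1] space the sampler expects; for example, a tile that starts at pixel x = 64 in a 256-pixel-wide image starts at texture coordinate 64 / 256 = 0.25.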

Upvotes: 2
