mlomb

Reputation: 172

OpenGL - vertex color in shader gets swapped

I'm trying to send colors to the shader but the colors get swapped, I send 0xFF00FFFF (magenta) but I get 0xFFFF00FF (yellow) in the shader.

From experimenting, I think something like this is happening:

[Image: diagram of how the color bytes appear to be rearranged]

My vertex shader:

#version 330 core

layout(location = 0) in vec4 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec4 color;

uniform mat4 pr_matrix;
uniform mat4 vw_matrix = mat4(1.0);
uniform mat4 ml_matrix = mat4(1.0);

out DATA
{
    vec4 position;
    vec3 normal;
    vec4 color;
} vs_out;

void main()
{
    gl_Position = pr_matrix * vw_matrix * ml_matrix * position;

    vs_out.position = position;
    vs_out.color = color;
    vs_out.normal = normalize(mat3(ml_matrix) * normal);
}

And the fragment shader:

#version 330 core

layout(location = 0) out vec4 out_color;

in DATA
{
    vec3 position;
    vec3 normal;
    vec4 color;
} fs_in;

void main()
{
    out_color = fs_in.color;

    //out_color = vec4(fs_in.color.y, 0, 0, 1);
    //out_color = vec4((fs_in.normal + 1 / 2.0), 1.0);
}

Here is how I set up the mesh:

struct Vertex_Color {
    Vec3 vertex;
    Vec3 normal;
    GLint color; // GLuint tested
};


std::vector<Vertex_Color> verts = std::vector<Vertex_Color>();

[loops]
    int color = 0xFF00FFFF; // magenta, uint tested
    verts.push_back({ vert, normal, color });


glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex_Color), &verts[0], GL_DYNAMIC_DRAW);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex_Color), (const GLvoid*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex_Color), (const GLvoid*)(offsetof(Vertex_Color, normal)));
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex_Color), (const GLvoid*)(offsetof(Vertex_Color, color)));
glEnableVertexAttribArray(2);

Here are some examples:

[Image: example renders showing the wrong colors]

I can't figure out what's wrong. Thanks in advance.

Upvotes: 0

Views: 398

Answers (1)

derhass

Reputation: 45322

Your code reinterprets an int as 4 consecutive bytes in memory. The internal encoding of an int (and of every other type) is machine-specific. In your case, you have 32-bit integers stored in little-endian byte order, which is the typical case on PC platforms.
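Concretely, taking the 0xRRGGBBAA interpretation implied by the question, here is a sketch of what ends up in memory:

int color = 0xFF00FFFF;   // intended as R=FF, G=00, B=FF, A=FF (magenta)

// little-endian memory layout, lowest address first:
//   byte 0: 0xFF   (alpha)
//   byte 1: 0xFF   (blue)
//   byte 2: 0x00   (green)
//   byte 3: 0xFF   (red)

// glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, ...) reads these four
// bytes in memory order, so the shader receives (1.0, 1.0, 0.0, 1.0) -- yellow,
// which matches the observed 0xFFFF00FF.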

You could use an array like GLubyte color[4] to explicitly get a defined memory layout.
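A minimal sketch of that variant, reusing the Vec3 type and the attribute setup from the question:

struct Vertex_Color {
    Vec3 vertex;
    Vec3 normal;
    GLubyte color[4];   // one byte per channel, always stored as R, G, B, A
};

// magenta, written byte by byte so the layout no longer depends on endianness
verts.push_back({ vert, normal, { 0xFF, 0x00, 0xFF, 0xFF } });

// the attribute setup can stay exactly as before:
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex_Color),
                      (const GLvoid*)(offsetof(Vertex_Color, color)));
glEnableVertexAttribArray(2);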

If you really want to use an integer type, you could send the data as an integer attribute with glVertexAttribIPointer (note the I there) and use unpackUnorm4x8 in the shader to get a normalized float vector. However, that requires at least GLSL 4.10, and might be less efficient than the standard approach.
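A sketch of that approach, assuming the color field is changed to a GLuint and the shaders are bumped to #version 410 core:

// note the I in glVertexAttribIPointer, and that there is no "normalized" argument
glVertexAttribIPointer(2, 1, GL_UNSIGNED_INT, sizeof(Vertex_Color),
                       (const GLvoid*)(offsetof(Vertex_Color, color)));
glEnableVertexAttribArray(2);

// vertex shader side:
//   layout(location = 2) in uint color;
//   ...
//   vs_out.color = unpackUnorm4x8(color).abgr;
//
// unpackUnorm4x8 puts the least significant byte into the first component, so a
// color written as 0xRRGGBBAA needs the .abgr swizzle to come out as RGBA.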

Upvotes: 4
