Jacob Cerón

Reputation: 25

Memory barrier problems for writing and reading an image OpenGL

I'm having a problem reading an image from a fragment shader. First I write to the image in shader program A (I'm just painting it blue), then I read from it in another shader program B to display it. But the read doesn't return the right color: instead I get a black image.

Unexpected result

This is my application code:

void GLAPIENTRY MessageCallback(GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length, const GLchar* message, const void* userParam)
{
    std::cout << "GL CALLBACK: type = " << std::hex << type << ", severity = " << std::hex << severity << ", message = " << message << "\n"
    << (type == GL_DEBUG_TYPE_ERROR ? "** GL ERROR **" : "") << std::endl;
}

class ImgRW
    : public Core
{
public:
    ImgRW()
        : Core(512, 512, "JFAD")
    {}

virtual void Start() override
{
    glEnable(GL_DEBUG_OUTPUT);
    glDebugMessageCallback(MessageCallback, nullptr);

    shader_w = new Shader("w_img.vert", "w_img.frag");
    shader_r = new Shader("r_img.vert", "r_img.frag");

    glGenTextures(1, &space);
    glBindTexture(GL_TEXTURE_2D, space);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, 512, 512);
    glBindImageTexture(0, space, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);

    glGenVertexArrays(1, &vertex_array);
    glBindVertexArray(vertex_array);
}

virtual void Update() override
{
    shader_w->use(); // writing shader
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

    shader_r->use(); // reading shader
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

virtual void End() override
{
    delete shader_w;
    delete shader_r;

    glDeleteTextures(1, &space);

    glDeleteVertexArrays(1, &vertex_array);
}

private:
    Shader* shader_w;
    Shader* shader_r;

GLuint vertex_array;

GLuint space;
};

#if 1
CORE_MAIN(ImgRW)
#endif

And these are my fragment shaders:

Writing to the image (GLSL):

#version 430 core

layout (binding = 0, rgba32f) uniform image2D img;

out vec4 out_color;

void main()
{
    imageStore(img, ivec2(gl_FragCoord.xy), vec4(0.0f, 0.0f, 1.0f, 1.0f));
}

Reading from the image (GLSL):

#version 430 core

layout (binding = 0, rgba32f) uniform image2D img;

out vec4 out_color;

void main()
{
    vec4 color = imageLoad(img, ivec2(gl_FragCoord.xy));
    out_color = color;
}

The only way I get the correct result is if I swap the order of the drawing commands, and then I don't need the memory barriers at all (in the Update function above):

shader_r->use(); // reading shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

shader_w->use(); // writing shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

I don't know if the problem is the graphics card or the drivers, whether I'm missing some kind of flag that enables memory barriers, whether I used the wrong barrier bits, or whether I placed the barriers in the wrong part of the code.

The vertex shader used by both shader programs is:

#version 430 core

void main()
{
    vec2 v[4] = vec2[4]
    (
        vec2(-1.0, -1.0),
        vec2( 1.0, -1.0),
        vec2(-1.0,  1.0),
        vec2( 1.0,  1.0)
    );

    vec4 p = vec4(v[gl_VertexID], 0.0, 1.0);
    gl_Position = p;
}

And this is my init function:

void Window::init()
{
    glfwInit();
    window = glfwCreateWindow(getWidth(), getHeight(), name, nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glfwSetFramebufferSizeCallback(window, framebufferSizeCallback);
    glfwSetCursorPosCallback(window, cursorPosCallback);

    //glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);

    assert(gladLoadGLLoader((GLADloadproc)glfwGetProcAddress) && "Couldn't initialize OpenGL");

    glEnable(GL_DEPTH_TEST);
}

And my Run function calls Start, Update, and End:

void Core::Run()
{
    std::cout << glGetString(GL_VERSION) << std::endl;

    Start();

    float lastFrame{ 0.0f };

    while (!window.close())
    {
        float currentFrame = static_cast<float>(glfwGetTime());
        Time::deltaTime = currentFrame - lastFrame;
        lastFrame = currentFrame;

        glViewport(0, 0, getWidth(), getHeight());
        glClearBufferfv(GL_COLOR, 0, &color[0]);
        glClearBufferfi(GL_DEPTH_STENCIL, 0, 1.0f, 0);

        Update();

        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    End();
}

Upvotes: 2

Views: 801

Answers (1)

Nicol Bolas

Reputation: 474546

glEnable(GL_DEPTH_TEST);

As I suspected.

Just because a fragment shader doesn't write a color output doesn't mean that those fragments will not affect the depth buffer. If the fragment passes the depth test and the depth write mask is on (assuming no other state is involved), it will update the depth buffer with the current fragment's depth (and the color buffer with uninitialized values, but that's a different matter).

Since you're drawing the same geometry both times, the second rendering's fragments will get the same depth values as the corresponding fragments from the first rendering. But the default depth function is GL_LESS. Since any value is not less than itself, this means that all fragments from the second rendering fail the depth test.

And therefore, they don't get rendered.

So just turn off the depth test. And while you're at it, turn off color writes for your "writing" rendering pass, since you're not writing to the color buffers.


Now, you do properly need the memory barrier between the two draw calls. But you only need the GL_SHADER_IMAGE_ACCESS_BARRIER_BIT, since that's how you're reading the data (via image load/store, not samplers).
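Putting both fixes together, the Update function might look like the following sketch. It assumes the shaders and members from the question; the glColorMask toggles are one way to suppress color writes for the writing pass, not the only option:

```cpp
glDisable(GL_DEPTH_TEST); // identical depths would fail the default GL_LESS test

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // writer pass only touches the image
shader_w->use();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Make the imageStore() results visible to the imageLoad() in the next draw.
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); // restore color writes for the read pass
shader_r->use();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```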

Upvotes: 2
