orlp

Reputation: 117741

Best way to render a bitmap/bitarray to 2d plane (with OpenGL)

Ok, this is what I have. I have a 1d bitmap (or bitarray, bitset, bitstring, but I'll call it a bitmap for now) containing the live or dead states of the cells in one Conway's Game of Life generation. The cell at (x, y) is represented by the bit at y * map_width + x.

Now that I have my Game of Life "engine" working, it would be nice to render some graphics. I thought OpenGL would be a nice choice for this, but I have no idea where to start, or whether there are any specific functions or shaders (I know nothing about shaders) that can efficiently render a bitmap onto a 2d plane as black 'n white pixels.

If you're now thinking "no you idiot, OpenGL is bad, go with ...", feel free to say so; I'm open to changes.

EDIT

I forgot to say that I use a compact bitarray storing 8 bits per byte and using masking to retrieve those bits. This is my hand-made library thingy:

#include <stdint.h> // uint32_t
#include <stdio.h>  // printf()
#include <stdlib.h> // malloc(), exit()
#include <string.h> // memset()
#include <limits.h> // CHAR_BIT

typedef uint32_t word_t;
enum {
    WORD_SIZE = sizeof(word_t), // size of one word in bytes
    BITS_PER_WORD = sizeof(word_t) * CHAR_BIT, // size of one word in bits
    MAX_WORD_VALUE = UINT32_MAX // max value of one word
};

typedef struct {
    word_t *words;
    int nwords;
    int nbytes;
} bitmap_t;

inline int WORD_OFFSET(int b) { return b / BITS_PER_WORD; }
inline int BIT_OFFSET(int b) { return b % BITS_PER_WORD; }

inline void setbit(bitmap_t bitmap, int n) { bitmap.words[WORD_OFFSET(n)] |= ((word_t)1 << BIT_OFFSET(n)); }
inline void flipbit(bitmap_t bitmap, int n) { bitmap.words[WORD_OFFSET(n)] ^= ((word_t)1 << BIT_OFFSET(n)); }
inline void clearbit(bitmap_t bitmap, int n) { bitmap.words[WORD_OFFSET(n)] &= ~((word_t)1 << BIT_OFFSET(n)); }
inline int getbit(bitmap_t bitmap, int n) { return (bitmap.words[WORD_OFFSET(n)] & ((word_t)1 << BIT_OFFSET(n))) != 0; }

inline void clearall(bitmap_t bitmap) {
    int i;
    for (i = bitmap.nwords - 1; i >= 0; i--) {
        bitmap.words[i] = 0;
    }
}

inline void setall(bitmap_t bitmap) {
    int i;
    for (i = bitmap.nwords - 1; i >= 0; i--) {
        bitmap.words[i] = MAX_WORD_VALUE;
    }
}

bitmap_t bitmap_create(int nbits) {
    bitmap_t bitmap;
    bitmap.nwords = nbits / BITS_PER_WORD + 1;
    bitmap.nbytes = bitmap.nwords * WORD_SIZE;
    bitmap.words = malloc(bitmap.nbytes);

    if (bitmap.words == NULL) { // could not allocate memory
        printf("ERROR: Could not allocate (enough) memory.\n");
        exit(1);
    }

    clearall(bitmap);
    return bitmap;
}

void bitmap_free(bitmap_t bitmap) {
    free(bitmap.words);
}
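
For illustration, a minimal usage sketch (compiled together with the code above in one file; map_width, map_height and the coordinates are just example values):

int main(void) {
    const int map_width = 64, map_height = 48;
    bitmap_t map = bitmap_create(map_width * map_height);

    int x = 3, y = 5;
    setbit(map, y * map_width + x);                               // cell (3, 5) becomes alive
    printf("cell (3, 5) = %d\n", getbit(map, y * map_width + x)); // prints 1
    clearbit(map, y * map_width + x);                             // and dies again

    bitmap_free(map);
    return 0;
}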

Upvotes: 3

Views: 2043

Answers (4)

kvark

Reputation: 5361

First of all, consider doing the simulation itself on the GPU by ping-ponging between 2 OpenGL textures. Barring some complex optimizations, Conway's Life is a pretty straightforward task for a GPU. It will require 2 framebuffer objects and some understanding of shaders; a sketch of the ping-pong step follows after the shaders below.

Edit-1: Example fragment shader (brain-compiled)

#version 130
uniform sampler2D state; // "input" is a reserved word in GLSL, so use another name
out float life;
void main() {
    ivec2 tc = ivec2(gl_FragCoord.xy);
    float orig = texelFetch(state,tc,0).r;
    float neighbours =
        texelFetchOffset(state,tc,0,ivec2(-1, 0)).r+
        texelFetchOffset(state,tc,0,ivec2(+1, 0)).r+
        texelFetchOffset(state,tc,0,ivec2( 0,-1)).r+
        texelFetchOffset(state,tc,0,ivec2( 0,+1)).r+
        texelFetchOffset(state,tc,0,ivec2(-1,-1)).r+
        texelFetchOffset(state,tc,0,ivec2(-1,+1)).r+
        texelFetchOffset(state,tc,0,ivec2(+1,-1)).r+
        texelFetchOffset(state,tc,0,ivec2(+1,+1)).r;
    if(neighbours > 2.9 && neighbours < 3.1)      // exactly 3 neighbours: cell is alive
        life = 1.0;
    else if(neighbours > 1.9 && neighbours < 2.1) // exactly 2 neighbours: cell keeps its state
        life = orig;
    else                                          // under- or over-population: cell dies
        life = 0.0;
}

The vertex shader is a simple pass-through:

#version 130
in vec2 vertex;
void main() {
   gl_Position = vec4(vertex,0.0,1.0);
}
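
For completeness, a rough sketch of the CPU-side ping-pong step that drives these shaders (texture, FBO and shader-program creation are omitted; tex, fbo, prog and draw_fullscreen_quad() are assumed helpers, and a GL 3.0 context with an extension loader is assumed):

/* Advance the simulation by one generation: read tex[*cur], write tex[1 - *cur]. */
void step_life(GLuint tex[2], GLuint fbo, GLuint prog, int *cur, int w, int h) {
    int src = *cur, dst = 1 - *cur;

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex[dst], 0); // render into the "other" texture
    glViewport(0, 0, w, h);                             // one fragment per cell

    glUseProgram(prog);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[src]);             // read the current generation
    glUniform1i(glGetUniformLocation(prog, "state"), 0);

    draw_fullscreen_quad();                             // runs the fragment shader once per cell

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    *cur = dst;                                         // swap roles for the next step
}

Displaying the current generation is then just the usual textured-quad pass described in the other answers.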

Upvotes: 1

datenwolf

Reputation: 162214

Old versions of OpenGL provide functionality to draw bitmaps directly, without the need for an intermediary texture: glBitmap. Compared to other methods for drawing images glBitmap is rather slow, but since it is used only sparingly this is not that bad.

http://www.opengl.org/sdk/docs/man/xhtml/glBitmap.xml

Bitmaps are placed using glRasterPos or glWindowPos.

http://www.opengl.org/sdk/docs/man/xhtml/glRasterPos.xml http://www.opengl.org/sdk/docs/man/xhtml/glWindowPos.xml

Bitmaps have a small pitfall: if the raster position set using glRasterPos or glWindowPos is outside the viewport, no part of the bitmap gets drawn, even if part of it reaches into the viewport; see the reference page of glBitmap for a workaround.
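
A rough sketch of how that could look for the Game of Life grid (a sketch only; it assumes cells is packed 8 cells per byte with each row starting on a byte boundary, so width should be a multiple of 8 for the bitmap_t layout from the question to match directly):

void draw_cells_glbitmap(const GLubyte *cells, int width, int height) {
    glClear(GL_COLOR_BUFFER_BIT);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);       // rows are byte-aligned, not 4-byte-aligned
    glPixelStorei(GL_UNPACK_LSB_FIRST, GL_TRUE); // bit 0 of each byte is the leftmost cell

    glColor3f(1.0f, 1.0f, 1.0f);                 // set bits are drawn in the current color
    glWindowPos2i(0, 0);                         // lower-left corner of the window
    glBitmap(width, height, 0.0f, 0.0f, 0.0f, 0.0f, cells);
}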

Upvotes: 0

Matěj Zábský

Reputation: 17272

This is code from my OGL Game of Life implementation.

This uploads the texture (do this every time you want to update the data):

glTexImage2D( GL_TEXTURE_2D, 0, 1, game->width, game->height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, game->culture[game->phase] );

game->culture[phase] is a data array of type char* of size width * height (phase toggles between two alternating arrays, one being written to while the other is read from).

Because GL_LUMINANCE is used, the colors will be only black and white.
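
Since your cells are bit-packed, you would first have to expand them to one byte per cell before the upload; a minimal sketch using the getbit() from your question (pixels must point to at least width * height bytes that you allocate yourself):

void bitmap_to_bytes(bitmap_t bitmap, unsigned char *pixels, int width, int height) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            /* 0x00 = dead (black), 0xFF = alive (white) */
            pixels[y * width + x] = getbit(bitmap, y * width + x) ? 0xFF : 0x00;
        }
    }
}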

Also, you need to set up the rectangle with this (every frame, but I guess you already know this):

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_QUADS);                                  // Draw A Quad
        glTexCoord2i( 0, 0 );                           // (texcoord must be set before its vertex)
        glVertex3f(-1.0f, 1.0f, 0.0f);                  // Top Left
        glTexCoord2i( 1, 0 );
        glVertex3f( 1.0f, 1.0f, 0.0f);                  // Top Right
        glTexCoord2i( 1, 1 );
        glVertex3f( 1.0f,-1.0f, 0.0f);                  // Bottom Right
        glTexCoord2i( 0, 1 );
        glVertex3f(-1.0f,-1.0f, 0.0f);                  // Bottom Left
    glEnd();

Of course you could use buffers and keep the "model" in the GPU memory, but that is not really necessary with only one quad.

Upvotes: 2

Thomas

Reputation: 181825

If you stick with OpenGL, the easiest way is to upload your bitmap as a texture, then render a quad mapped with that texture. The uploading bit would look something like this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, data);

This assumes that every cell is a single byte, with value 0 for black and 0xFF for white. Note that, on some OpenGL versions, width and height must be powers of two.
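
Before the first upload you also need a texture object with filtering set up; a minimal sketch of that one-time setup might be:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // keep cells as crisp squares
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // instead of blurring them
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glEnable(GL_TEXTURE_2D); // needed with the fixed-function pipeline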

Upvotes: 0
