Yagg

Reputation: 125

16-bit grayscale PNG

I'm trying to write (using libpng) a 16-bit grayscale image where each pixel's value equals the sum of its coordinates. The following code should produce a 16-bit PNG, but instead it produces an 8-bit-looking image like this. Why?

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <png.h>

void save_png(FILE* fp, long int size)
{
    png_structp png_ptr = NULL;
    png_infop info_ptr = NULL;
    size_t x, y;
    png_bytepp row_pointers;

    png_ptr = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    if (png_ptr == NULL) {
        return;
    }

    info_ptr = png_create_info_struct(png_ptr);
    if (info_ptr == NULL) {
        png_destroy_write_struct(&png_ptr, NULL);
        return;
    }

    if (setjmp(png_jmpbuf(png_ptr))) {
        png_destroy_write_struct(&png_ptr, &info_ptr);
        return;
    }

    png_set_IHDR(png_ptr, info_ptr,
                 size, size, // width and height
                 16, // bit depth
                 PNG_COLOR_TYPE_GRAY, // color type
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT, PNG_FILTER_TYPE_DEFAULT);

    /* Initialize rows of PNG. */
    row_pointers = (png_bytepp)png_malloc(png_ptr,
        size * sizeof(png_bytep)); /* png_sizeof was removed in libpng 1.6 */

    for (int i = 0; i < size; i++)
        row_pointers[i] = (png_bytep)png_malloc(png_ptr, size * 2); /* 2 bytes per 16-bit pixel */

    //set row data
    for (y = 0; y < size; ++y) {
        png_bytep row = row_pointers[y];
        for (x = 0; x < size; ++x) {
                short color = x + y;
                /* note: this writes the low byte first, but PNG stores
                   16-bit samples MSB first; see the answers below */
                *row++ = (png_byte)(color & 0xFF);
                *row++ = (png_byte)(color >> 8);
        }
    }

    /* Actually write the image data. */
    png_init_io(png_ptr, fp);
    png_set_rows(png_ptr, info_ptr, row_pointers);
    png_write_png(png_ptr, info_ptr, PNG_TRANSFORM_IDENTITY, NULL);
    //png_write_image(png_ptr, row_pointers);

    /* Cleanup. */
    for (y = 0; y < size; y++) {
        png_free(png_ptr, row_pointers[y]);
    }
    png_free(png_ptr, row_pointers);
    png_destroy_write_struct(&png_ptr, &info_ptr);
}

int main()
{
  FILE* f;
  if((f=fopen("test.png", "wb"))!=NULL)
  {
    save_png(f, 257);

    fclose(f);
  }
  return 0;
}

Upvotes: 12

Views: 21491

Answers (3)

Adam

Reputation: 1396

This is an expectations-versus-reality problem. One creates a 16-bit PNG and views it on a computer with 8-bit color, which only allows color values in the range 0-255 (1 byte = 8-bit color depth).

A 16-bit PNG stores each sample as an unsigned 16-bit integer.

On such a display, any value greater than 255 simply wraps around (effectively taken modulo 256). I made a similar image, but with a horizontal gradient: it is 1000 pixels wide and I see 4 repetitions of the gradient (1000 / 256 ≈ 4).

All elements in the chain, from the graphics file through the application to the monitor, must support the same color bit depth for the image to display as intended.
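
As a rough illustration (my own numbers, not from the original post): when a 16-bit sample lands on an 8-bit display, keeping the low byte wraps the value around (the repeated-gradient effect above), while keeping the high byte is what a proper 16-to-8-bit conversion does.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t sample = 300;                     /* a 16-bit gray value */
    uint8_t wrapped = (uint8_t)sample;         /* low byte: 300 % 256 = 44 */
    uint8_t rescaled = (uint8_t)(sample >> 8); /* high byte: 300 / 256 = 1 */
    printf("wrapped=%u rescaled=%u\n", wrapped, rescaled);
    return 0;
}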

I have checked the file with an ImageMagick HDR/16-bit viewer. It reports:

  • the file as PNG16: one gray channel, with gray values from 0 to 65282, so close to 2^16
  • the viewer as 24-bit color, so gray gets only 2^8 = 256 levels


Upvotes: 0

BareMetalCoder

Reputation: 629

Sorry for resurrecting an old thread, but I got here after googling how to write 16-bit grayscale images. I ran into similar problems, and I thought it would be helpful to post how I resolved the issue.

TL;DR:

a) The bytes have to be provided to the library MSB first (most significant byte first), so it works if you flip the two byte-writing lines from the question to this:

*row++ = (png_byte)(color >> 8);   /* high byte first */
*row++ = (png_byte)(color & 0xFF); /* then low byte */

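Alternatively (my suggestion, not part of the original answer), you can keep the little-endian writes from the question and let libpng swap the bytes at write time by passing a transform flag to png_write_png:

/* Have libpng byte-swap each 16-bit sample into the MSB-first
   order the PNG format requires, instead of swapping by hand. */
png_write_png(png_ptr, info_ptr, PNG_TRANSFORM_SWAP_ENDIAN, NULL);
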
b) To actually see a 16-bit value on an 8-bit screen, note that any value under 256 shows as black once only the top 8 bits survive. Practically speaking, values spanning many multiples of 256 are needed to see anything at all. The color = x + y code above probably didn't produce values bright enough.
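
For instance (a sketch of mine reusing the question's variable names; the scale factor is one arbitrary choice), the question's inner loop could stretch the gradient over the full 16-bit range:

/* Stretch x+y (range 0 .. 2*(size-1)) to cover 0 .. 65535. */
uint16_t color = (uint16_t)(((x + y) * 65535UL) / (2 * (size - 1)));
*row++ = (png_byte)(color >> 8);   /* MSB first, as PNG requires */
*row++ = (png_byte)(color & 0xFF);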

How I got to the conclusions above:

I started with the code above, using only 'x' as the color, not 'x + y'.

The intent was to have a gradient that faded in from black on the left to whatever the max x was on the right.

However, instead of having one long gradient, I was getting several narrow gradients instead. This screamed "WRONG ENDIANNESS!"

I tried swapping the bytes, but then I got a black image. It took me a while to clue in: since the screen only displays 8 bits, even the maximum value of (in my case) 968 was too dark. That maps to 2 or 3 on an 8-bit screen, and even with high gamma I couldn't see the difference.

Since my maximum x was roughly 1000 and the maximum for a 16-bit value is 65535, I used (x * 60) as my color. That produced a visible result.

Thanks for the original post. It was an excellent example to get started.

Upvotes: 6

unwind

Reputation: 400159

The linked-to image shows as "16-bit" in Windows 7's "Properties" dialog, so the file itself is fine. I'd guess you're just seeing various applications falling back to an 8-bit conversion for display, which is to be expected, since most display devices don't support 16 bits per channel.
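
If you'd rather check programmatically than trust a viewer, here is a small sketch of mine that reads the bit depth back with libpng (error handling kept minimal):

#include <stdio.h>
#include <png.h>

/* Print the bit depth recorded in a PNG's IHDR chunk. */
int main(void)
{
    FILE* fp = fopen("test.png", "rb");
    if (fp == NULL) return 1;

    png_structp png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    png_infop info_ptr = png_create_info_struct(png_ptr);
    if (setjmp(png_jmpbuf(png_ptr))) { fclose(fp); return 1; }

    png_init_io(png_ptr, fp);
    png_read_info(png_ptr, info_ptr);
    printf("bit depth: %d\n", png_get_bit_depth(png_ptr, info_ptr)); /* prints 16 for the file above */

    png_destroy_read_struct(&png_ptr, &info_ptr, NULL);
    fclose(fp);
    return 0;
}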

Upvotes: 8
