Sir Wellington

Reputation: 596

How to Convert a 12-bit Image to 8-bit in C/C++?

All right, so I have been very frustrated trying to convert a 12-bit buffer to an 8-bit one. The image source is 12-bit grayscale (decompressed from JPEG 2000), with a value range of 0-4095, and I have to reduce that to 0-255. Common sense tells me I should simply divide each pixel value by 16, like this. But when I try it, the image comes out too light.

void
TwelveToEightBit(
    unsigned char * charArray,
    unsigned char * shortArray,
    const int num )
{
    short shortValue = 0; //Will contain the two bytes from shortArray.

    for( int i = 0, j = 0; i < num; i++, j += 2 )
    {
        // Bitwise manipulations to fit two chars onto one short.
        shortValue  = (shortArray[j] << 8);
        shortValue += shortArray[j+1];

        charArray[i] = (unsigned char)(shortValue / 16);
    }
}

Now I can tell that some contrast adjustment is needed. Any ideas, anyone?

Many thanks in advance

Upvotes: 4

Views: 16315

Answers (5)

Pierre
Pierre

Reputation: 1174

Like this:

// Image is stored in 'data'
unsigned short* I = (unsigned short*)data;

for(int i = 0; i < imageSize; i++) {
   // 'color' is the 8-bit value; divide by 4095 so that 4095 maps to 255,
   // and use unsigned char so large values don't go negative.
   unsigned char color = (unsigned char)(255.0 * I[i] / 4095.0);
   /*...*/
}
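A fuller sketch of this linear-scaling idea (function and variable names are mine, not from the answer): dividing by 4095 rather than 4096 maps the maximum 12-bit value exactly to 255, and rounding avoids a slight darkening bias from truncation.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Scale one 12-bit sample (0-4095) to 8 bits (0-255) with rounding.
// Dividing by 4095 (not 4096) maps 4095 exactly to 255.
inline uint8_t Scale12To8(uint16_t x) {
    return static_cast<uint8_t>(std::lround(x * 255.0 / 4095.0));
}

// Convert a whole buffer of 12-bit samples.
std::vector<uint8_t> Convert(const std::vector<uint16_t>& in) {
    std::vector<uint8_t> out(in.size());
    for (size_t i = 0; i < in.size(); ++i)
        out[i] = Scale12To8(in[i]);
    return out;
}
```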

Upvotes: 1

Jim
Jim

Reputation: 829

If you just want to drop the 4 least significant bits, you can do the following:

unsigned int value = SOMEVALUE;       // starting 12-bit value
value = (value & 0xFF0);              // keep the top 8 of the 12 bits
unsigned char final_value = (unsigned char)(value >> 4); // shift down to 8 bits

Note the "unsigned". You don't want the sign bit mucking with your values. (Also note the parentheses around value >> 4: a cast binds tighter than a shift, so (uint8_t)value >> 4 would truncate to 8 bits before shifting.)
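Applied to the asker's byte-pair buffer, the same drop-the-low-4-bits idea might look like this (a sketch assuming the big-endian byte layout from the question; the function name is mine):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Convert 'num' 12-bit samples, stored as big-endian byte pairs in 'src',
// to 8-bit values by keeping only the top 8 of the 12 significant bits.
void DropLowBits(const uint8_t* src, uint8_t* dst, size_t num) {
    for (size_t i = 0, j = 0; i < num; ++i, j += 2) {
        uint16_t value = static_cast<uint16_t>((src[j] << 8) | src[j + 1]);
        dst[i] = static_cast<uint8_t>((value & 0x0FF0u) >> 4);
    }
}
```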

Upvotes: 1

Arun
Arun

Reputation: 20383

The main problem, as I understand it, is to convert a 12-bit value to an 8-bit one.

Range of 12-bit value = 0 - 4095 (4096 values)
Range of  8-bit value = 0 -  255 ( 256 values)

I would try to convert a 12-bit value x to an 8-bit value y like this:

  1. First, scale down to the range 0-1, and
  2. Then, scale back up to the range 0-255.

Some C-ish code:

#include <stdint.h>

uint16_t x = some_value;
uint8_t  y = (uint8_t)(((double)x / 4096) * 256); /* cast after the multiply */

(Note the parentheses: casting before the multiplication, as in (uint8_t)((double)x/4096) * 256, would truncate the 0-1 intermediate to zero.)

Update

Thanks to Kriss's comment, I realized that I had disregarded the speed issue. The above solution, due to its floating-point operations, might be slower than pure integer arithmetic.

Then I started considering another solution. How about constructing y with the 8 most significant bits of x? In other words, by trimming off the 4 least significant bits.

y = x >> 4;

Will this work?
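It should: shifting right by 4 is the same as integer division by 16, which is the truncated (not rounded) version of the scaling above. A quick check of that equivalence (the helper name and assertions are mine):

```cpp
#include <cassert>
#include <cstdint>

// x >> 4 equals (x * 256) / 4096 in integer arithmetic, i.e. x / 16,
// so the shift yields the truncated 8-bit value.
inline uint8_t ShiftDown(uint16_t x) {
    return static_cast<uint8_t>(x >> 4);
}
```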

Upvotes: 1

Sir Wellington

Reputation: 596

In actuality, it was merely some simple contrast adjustment that needed to be made. I realized this as soon as I loaded the result image in Photoshop and ran auto-contrast: the result then very closely resembled the expected output image. I found an algorithm that does the contrast adjustment and will post it here for others' convenience:

#include <math.h>

 short shortValue   = 0; //Will contain the two bytes from shortBuffer.
 double doubleValue = 0; //Will contain intermediary calculations.

 //Contrast adjustment is necessary when converting;
 //setting 50 as the contrast seems to be the real sweet spot.
 double contrast = pow( (100.0 + 50.0) / 100.0, 2 );

 for ( int i = 0, j = 0; i < num; i++, j += 2 )
 {
  //Bitwise manipulations to fit two chars onto one short.
  shortValue  = (shortBuffer[j] << 8);
  shortValue += shortBuffer[j+1];

  doubleValue = (double)shortValue;

  //Divide by 16 to bring 0-4095 down to 0-255 (12 to 8 bits).
  doubleValue /= 16;

  //Normalize to the range 0-1.
  doubleValue /= 255;
  //Center pixel values at 0, so that the range is -0.5 to 0.5.
  doubleValue -= 0.5;
  //Multiply by the contrast ratio; this spreads the values
  //away from the center....see the histogram for further details.
  doubleValue *= contrast;

  //Shift back to a 0-1 range...
  doubleValue += 0.5;
  //...and back to 0-255.
  doubleValue *= 255;

  //If the pixel values clip a little, clamp them.
  if (doubleValue > 255)
   doubleValue = 255;
  else if (doubleValue < 0)
   doubleValue = 0;

  //Finally, put the result back into the char buffer.
  charBuffer[i] = (unsigned char)doubleValue;
 }

Upvotes: 3

Cheers and hth. - Alf

Reputation: 145279

Wild guess: your code assumes a big-endian machine (most significant byte first). A Windows PC is little-endian. So perhaps try

  shortValue = (shortArray[j+1]<<8);
  shortValue += (shortArray[j]);

If indeed endianness is the problem, then the code you presented would just shave off the 4 most significant bits of every value and expand the rest to the intensity range. Hm, EDIT, 2 secs later: no, that was a thinko. But try it anyway?
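One way to make the byte order explicit regardless of the host machine is to assemble the value from the two bytes rather than reinterpreting memory. A sketch for the little-endian layout Alf suggests (the function name is mine):

```cpp
#include <cassert>
#include <cstdint>

// Assemble a 16-bit sample from a little-endian byte pair:
// the low byte comes first in memory, the high byte second.
// Works identically on any host, since it never reinterprets memory.
inline uint16_t ReadLE16(const uint8_t* p) {
    return static_cast<uint16_t>(p[0] | (p[1] << 8));
}
```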

Cheers & hth.,

– Alf

Upvotes: -1
