Benjamin

Reputation: 280

How to convert 16 bit hex color to RGB888 values in C++

I have a uint16_t color and need to convert it into its RGB equivalent. The value is laid out so that the first 5 bits represent red, the next 6 green, and the last 5 blue.

So far I have found something close to a solution but not quite due to truncation.

#include <cstdio>
#include <cstdint>

void hexToRGB(uint16_t hexValue)
{

    int r = ((hexValue >> 11) & 0x1F);  // Extract the 5 R bits
    int g = ((hexValue >> 5) & 0x3F);   // Extract the 6 G bits
    int b = ((hexValue) & 0x1F);        // Extract the 5 B bits

    // Scale each channel to 0..255, then subtract ad-hoc offsets to compensate for truncation
    r = ((r * 255) / 31) - 4;
    g = ((g * 255) / 63) - 2;
    b = ((b * 255) / 31) - 4;

    printf("r: %d, g: %d, b: %d\n",r, g, b);
}

int main()
{
    //50712=0xC618 
    hexToRGB(50712);    
    return 0;
}

The example above yields r: 193, g: 192, b: 193, but it should be r: 192, g: 192, b: 192. I have been using this question as a reference, but I essentially need the reverse of what they are asking.

Upvotes: 3

Views: 9579

Answers (2)

Rudy Velthuis

Reputation: 28806

What about the following:

unsigned r = (hexValue & 0xF800) >> 8;       // rrrrr... ........ -> rrrrr000
unsigned g = (hexValue & 0x07E0) >> 3;       // .....ggg ggg..... -> gggggg00
unsigned b = (hexValue & 0x001F) << 3;       // ........ ...bbbbb -> bbbbb000

printf("r: %u, g: %u, b: %u\n", r, g, b);

That should result in 0xC618 --> 192, 192, 192, but 0xFFFF --> 248, 252, 248, i.e. not pure white.
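In case a complete, compilable example helps, here is a minimal sketch of that approach (the wrapper name rgb565ToRgb888Shift is just illustrative):

#include <cstdint>
#include <cstdio>

// Shift-based conversion: place each field in the high bits of an 8-bit channel.
void rgb565ToRgb888Shift(uint16_t hexValue)
{
    unsigned r = (hexValue & 0xF800) >> 8;   // rrrrr000
    unsigned g = (hexValue & 0x07E0) >> 3;   // gggggg00
    unsigned b = (hexValue & 0x001F) << 3;   // bbbbb000
    printf("0x%04X -> r: %u, g: %u, b: %u\n", hexValue, r, g, b);
}

int main()
{
    rgb565ToRgb888Shift(0xC618);  // 192, 192, 192
    rgb565ToRgb888Shift(0xFFFF);  // 248, 252, 248 (not pure white)
    return 0;
}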

If you want 0xFFFF to be pure white, you'll have to scale, so

unsigned r = (hexValue & 0xF800) >> 11;
unsigned g = (hexValue & 0x07E0) >> 5;
unsigned b = hexValue & 0x001F;

r = (r * 255) / 31;
g = (g * 255) / 63;
b = (b * 255) / 31;

Then 0xC618 --> 197, 194, 197, instead of the expected 192, 192, 192, but 0xFFFF is pure white and 0x0000 is pure black.
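Again as a self-contained sketch (the name rgb565ToRgb888Scaled is just illustrative):

#include <cstdint>
#include <cstdio>

// Scaled conversion: expand each field to the full 0..255 range.
void rgb565ToRgb888Scaled(uint16_t hexValue)
{
    unsigned r = (hexValue & 0xF800) >> 11;  // 5-bit red,   0..31
    unsigned g = (hexValue & 0x07E0) >> 5;   // 6-bit green, 0..63
    unsigned b = hexValue & 0x001F;          // 5-bit blue,  0..31

    r = (r * 255) / 31;
    g = (g * 255) / 63;
    b = (b * 255) / 31;

    printf("0x%04X -> r: %u, g: %u, b: %u\n", hexValue, r, g, b);
}

int main()
{
    rgb565ToRgb888Scaled(0xC618);  // 197, 194, 197
    rgb565ToRgb888Scaled(0xFFFF);  // 255, 255, 255 (pure white)
    rgb565ToRgb888Scaled(0x0000);  // 0, 0, 0 (pure black)
    return 0;
}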

Upvotes: 8

donkopotamus

Reputation: 23176

There is no single "correct" way to convert from RGB565 to RGB888. Each colour component needs to be scaled from its 5-bit or 6-bit range to an 8-bit range, and there are various ways to do this, each often producing a different kind of visual artifact in an image.

When scaling a colour in the n-bit range we might decide we want the following to be generally true:

  • that absolute black (e.g. 00000 in 5-bit space) must map to absolute black in 8-bit space;
  • that absolute white (e.g. 11111 in 5-bit space) must map to absolute white in 8-bit space.

Achieving this means we basically wish to scale the value from the (2^n - 1) shades in n-bit space to the (2^8 - 1) shades in 8-bit space. That is, we effectively want to do the following in some way:

r_8 = (255 * r / 31)
g_8 = (255 * g / 63)
b_8 = (255 * b / 31)

Different approaches often taken are:

  • scale using integer division
  • scale using floating division and then round
  • bitshift into 8-bit space and add the most significant bits

The latter approach is effectively the following:

r_8 = (r << 3) | (r >> 2)
g_8 = (g << 2) | (g >> 4)
b_8 = (b << 3) | (b >> 2)

For your 5-bit value 11000 these approaches would result in 8-bit values of:

  • 197 (integer division)
  • 197 (floating division, rounded)
  • 198 (bit replication: 11000000 | 110)

Similarly, your 6-bit value 110000 would result in 8-bit values of:

  • 194 (integer division)
  • 194 (floating division, rounded)
  • 195 (bit replication: 11000000 | 11)
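For concreteness, here is a small sketch comparing the three approaches on a single 5-bit channel value (the helper names are my own; the floating variant simply rounds to the nearest integer):

#include <cstdio>
#include <cmath>

// Three common ways to expand a 5-bit channel (0..31) to 8 bits.
unsigned scaleIntDiv(unsigned v5)  { return (v5 * 255) / 31; }                           // integer division
unsigned scaleRound(unsigned v5)   { return (unsigned)std::lround(v5 * 255.0 / 31.0); }  // float division, rounded
unsigned scaleBitRepl(unsigned v5) { return (v5 << 3) | (v5 >> 2); }                     // shift and replicate MSBs

int main()
{
    unsigned r5 = 0x18;  // 11000, the 5-bit red/blue value from 0xC618
    printf("int div: %u, round: %u, bit repl: %u\n",
           scaleIntDiv(r5), scaleRound(r5), scaleBitRepl(r5));
    // prints: int div: 197, round: 197, bit repl: 198
    return 0;
}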

Upvotes: 6
