Aurélien Pierre

Reputation: 703

Integer type to display 10-bits RGB for display

I'm having a lot of trouble finding any developer documentation on how to implement 10-bit RGB output for displays in C, mainly for Xorg/Wayland on Linux and, if possible, compatible with Windows.

Currently, the application I'm working on (darktable) uses uint8_t to output RGB values. What would be the type for a 10-bit uint? Is there any way to check for 10-bit support of the GPU/codec from the code?

Upvotes: 1

Views: 1298

Answers (2)

datenwolf

Reputation: 162164

The exact way the color channels are arranged depends on the API. It may very well be planar (i.e. one mono image per channel), it may be packed (i.e. several channels packed into a single word of data), or it may be interleaved (possibly using a different representation for each channel).

However, one thing is for sure: for any channel format that doesn't exactly fit a "native" type, some bit twiddling will have to happen to access it.

To get an idea of how vast that field is, just look at the image formats specified by the very first version of the Vulkan API: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch31s03.html – that document also describes how exactly the bits are arranged for each format.
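As a minimal sketch of what that bit twiddling looks like, here is one way to extract channels from a 10:10:10:2 word where alpha occupies the top two bits (the helper names and the assumed layout are mine, not from any particular API; other formats shuffle the fields differently):

```c
#include <stdint.h>

/* Extract one 10-bit channel from a packed 32-bit word,
 * given the channel's bit offset (0, 10 or 20 in this layout). */
uint32_t unpack10(uint32_t packed, unsigned shift)
{
  return (packed >> shift) & 0x3FFu;
}

/* Extract the 2-bit alpha field assumed to sit in bits 31..30. */
uint32_t unpack2(uint32_t packed)
{
  return (packed >> 30) & 0x3u;
}
```

The same shift-and-mask pattern, with the offsets taken from the format's spec, covers any packed integer layout.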

Upvotes: 1

Scheff's Cat

Reputation: 20141

I googled a bit to clarify what 10-bit RGB could mean.

On Wikipedia Color Depth – Deep color (30/36/48-bit) I found:

Some earlier systems placed three 10-bit channels in a 32-bit word, with 2 bits unused (or used as a 4-level alpha channel).

which seemed to me the most reasonable.

Going with this, there are 10 bits for Red, 10 bits for Green, and 10 bits for Blue, plus 2 bits unused (or reserved for Alpha).

This leaves two questions open:

  1. Is it stored RGBa or BGRa or aRGB? (I believe that I've seen all these variations in the past.)

  2. Has the composed value to be stored Little-Endian or Big-Endian?

When this hit me in practical work, I made an implementation based on an assumption, rendered a test pattern, checked whether it looked as expected, and if not, swapped the respective parts in the implementation. Nothing I'm proud of, but IMHO it got me the expected results with the least effort.
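The second question can also be probed directly in code. A small sketch (my own, not part of the original trial-and-error approach): reinterpret a packed word as bytes and see which end the low byte lands on.

```c
#include <stdint.h>
#include <string.h>

/* Returns 1 on a little-endian host, 0 on a big-endian one:
 * a packed 32-bit value is copied into a byte array, and we
 * check whether the least significant byte comes first. */
int host_is_little_endian(void)
{
  uint32_t probe = 0x000003FFu; /* pure blue in the aRGB layout below */
  unsigned char bytes[4];
  memcpy(bytes, &probe, sizeof bytes);
  return bytes[0] == 0xFFu; /* low byte first => little-endian */
}
```

Whether the display pipeline expects the same byte order as the host is a separate question, which is exactly where the test-pattern check comes in.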

So, assuming I have a color stored as an RGB triple with component values in the range [0, 1], the following function converts it to aRGB:

uint32_t makeRGB30(float r, float g, float b)
{
  const uint32_t mask = (1u << 10u) - 1u;
  /* convert float -> uint, rounding to nearest */
  uint32_t rU = (uint32_t)(r * mask + 0.5f),
           gU = (uint32_t)(g * mask + 0.5f),
           bU = (uint32_t)(b * mask + 0.5f);
  /* combine and return color components */
  return ((rU & mask) << 20) | ((gU & mask) << 10) | (bU & mask);
}

This results in values with the following bit layout:

aaRRRRRR.RRRRGGGG.GGGGGGBB.BBBBBBBB
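For completeness, the reverse direction is just the mirror image (a sketch of my own, not part of the answer's original code): mask out each 10-bit field and scale it back to [0, 1].

```c
#include <stdint.h>

/* Inverse of makeRGB30: unpack an aRGB 10:10:10 word
 * back into float components in the range [0, 1]. */
void splitRGB30(uint32_t rgb, float *r, float *g, float *b)
{
  const uint32_t mask = (1u << 10u) - 1u;
  *r = (float)((rgb >> 20) & mask) / mask;
  *g = (float)((rgb >> 10) & mask) / mask;
  *b = (float)(rgb & mask) / mask;
}
```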

A small sample for demo:

#include <stdint.h>
#include <stdio.h>

uint32_t makeRGB30(float r, float g, float b)
{
  const uint32_t mask = (1u << 10u) - 1u;
  /* convert float -> uint, rounding to nearest */
  uint32_t rU = (uint32_t)(r * mask + 0.5f),
           gU = (uint32_t)(g * mask + 0.5f),
           bU = (uint32_t)(b * mask + 0.5f);
  /* combine and return color components */
  return ((rU & mask) << 20) | ((gU & mask) << 10) | (bU & mask);
}

int main(void)
{
  /* samples */
  const float colors[][3] = {
    { 0.0f, 0.0f, 0.0f }, /* black */
    { 1.0f, 0.0f, 0.0f }, /* red */
    { 0.0f, 1.0f, 0.0f }, /* green */
    { 0.0f, 0.0f, 1.0f }, /* blue */
    { 1.0f, 1.0f, 0.0f }, /* yellow */
    { 1.0f, 0.0f, 1.0f }, /* magenta */
    { 0.0f, 1.0f, 1.0f }, /* cyan */
    { 1.0f, 1.0f, 1.0f } /* white */
  };
  const size_t n = sizeof colors / sizeof *colors;
  for (size_t i = 0; i < n; ++i) {
    const float *color = colors[i];
    uint32_t rgb = makeRGB30(color[0], color[1], color[2]);
    printf("(%f, %f, %f): %08x\n", color[0], color[1], color[2], rgb);
  }
  /* done */
  return 0;
}

Output:

(0.000000, 0.000000, 0.000000): 00000000
(1.000000, 0.000000, 0.000000): 3ff00000
(0.000000, 1.000000, 0.000000): 000ffc00
(0.000000, 0.000000, 1.000000): 000003ff
(1.000000, 1.000000, 0.000000): 3ffffc00
(1.000000, 0.000000, 1.000000): 3ff003ff
(0.000000, 1.000000, 1.000000): 000fffff
(1.000000, 1.000000, 1.000000): 3fffffff

Live Demo on ideone

Upvotes: 2
