Michael Chourdakis

Reputation: 11178

Nvidia HDR Encoder

I'm trying to use the NVIDIA SDK to encode HDR video with H265. Windows Media Foundation doesn't (yet) support 10-bit input with H265.

I can't seem to feed the colors to the encoder correctly. I'm rendering a test video of three frames, each solid green with the green channel at 1.0, 2.0 and 3.0 (my maximum) respectively; that is, D2D1_COLOR_F values of {0,1,0,1}, {0,2,0,1} and {0,3,0,1}.
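For reference, the three test colors as I hand them to Direct2D look like this (just the color values; the rest of the render code is omitted):

    // The three solid-green test frames, as D2D1_COLOR_F (linear float) values:
    D2D1_COLOR_F frames[3] =
    {
        { 0.0f, 1.0f, 0.0f, 1.0f },  // green = 1.0
        { 0.0f, 2.0f, 0.0f, 1.0f },  // green = 2.0
        { 0.0f, 3.0f, 0.0f, 1.0f },  // green = 3.0 (my monitor's maximum)
    };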

Only the frame with the maximum value of 3.0 comes out correctly (left: the generated video; right: the color that should appear):

[screenshot: generated video (left) vs. expected color (right), green = 3.0]

With green set to 2.0, this is the result:

[screenshot: generated video (left) vs. expected color (right), green = 2.0]

And with green set to 1.0, it's even worse:

[screenshot: generated video (left) vs. expected color (right), green = 1.0]

And this is the result with a real HDR image:

[screenshot: encoded output (left) vs. the original HDR image (right)]

The Nvidia encoder accepts colors in AR30 format, i.e. 10 bits each for R, G and B and 2 bits for alpha (which is ignored). My DirectX renderer produces the colors in GUID_WICPixelFormat128bppPRGBAFloat, so I'm converting like this:

    struct fourx
    {
        float r, g, b, a;
    };

    // wi/he: frame width/height; the source buffer holds 4 floats (16 bytes) per pixel.
    float* f = pointer_to_floats;
    for (int x = 0; x < wi; x++)
    {
        for (int y = 0; y < he; y++)
        {
            char* dx = (char*)f;
            dx += y * wi * 16;   // row offset, 16 bytes per pixel
            dx += x * 16;        // column offset
            fourx* col = (fourx*)dx;

            // Linear values run from 0 to max_number; scale them to the 10-bit range.
            float max_number = 3.0f; // taken from my monitor's white level, as described [here][6]
            col->r *= 1023.0f / max_number;
            col->g *= 1023.0f / max_number;
            col->b *= 1023.0f / max_number;

            // Pack as A2R10G10B10: B in the lowest 10 bits, then G, then R; alpha unused.
            DirectX::PackedVector::XMUDECN4 u4 = {};
            u4.z = (int)col->r;
            u4.y = (int)col->g;
            u4.x = (int)col->b;
            u4.w = 0;

            DWORD* dy = output_pointer;
            dy += y * wi;
            dy += x;
            *dy = u4.v;  // the packed 32-bit word
        }
    }

I suspect something's wrong with the gamma (transfer function), but I'm not sure how to proceed from here.
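Is the fix to apply the SMPTE ST 2084 (PQ) curve instead of a linear scale before quantizing to 10 bits? A minimal sketch of what I think that would look like (constants are from the ST 2084 spec; the function name LinearToPQ and the scRGB mapping of 1.0f = 80 nits are my assumptions, not something the SDK documents):

    #include <cmath>

    // SMPTE ST 2084 (PQ) inverse EOTF.
    // Input: linear light normalized so 1.0 = 10,000 nits.
    // Output: nonlinear PQ signal in [0,1], ready to scale to 10 bits.
    static float LinearToPQ(float L)
    {
        const float m1 = 2610.0f / 16384.0f;          // 0.1593017578125
        const float m2 = 2523.0f / 4096.0f * 128.0f;  // 78.84375
        const float c1 = 3424.0f / 4096.0f;           // 0.8359375
        const float c2 = 2413.0f / 4096.0f * 32.0f;   // 18.8515625
        const float c3 = 2392.0f / 4096.0f * 32.0f;   // 18.6875

        float p = std::pow(L, m1);
        return std::pow((c1 + c2 * p) / (1.0f + c3 * p), m2);
    }

    // This would replace the linear scale above, e.g. for the green channel:
    //   float nits = col->g * 80.0f;  // assuming scRGB, where 1.0f = 80 nits
    //   u4.y = (int)(LinearToPQ(nits / 10000.0f) * 1023.0f + 0.5f);

If that's the right direction, is the linear scale by 1023/max_number the only thing I need to change?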

Upvotes: 0

Views: 106

Answers (0)
