dbr

Reputation: 169663

Unpacking hex-encoded floats

I'm trying to translate the following Python code into C++:

import struct
import binascii


inputstring = ("0000003F" "0000803F" "AD10753F" "00000080")
num_vals = 4

for i in range(num_vals):
    rawhex = inputstring[i*8:(i*8)+8]

    # <f for little endian float
    val = struct.unpack("<f", binascii.unhexlify(rawhex))[0]
    print(val)

    # Output:
    # 0.5
    # 1.0
    # 0.957285702229
    # -0.0

So it reads 32 bits' worth of the hex-encoded string, turns it into a byte array with the unhexlify function, and interprets it as a little-endian float value.

The following almost works, but the code is kind of crappy (and the last 00000080 parses incorrectly):

#include <sstream>
#include <iostream>


int main()
{
    // The hex-encoded string, and number of values are loaded from a file.
    // The num_vals might be wrong, so some basic error checking is needed.
    std::string inputstring = "0000003F" "0000803F" "AD10753F" "00000080";
    int num_vals = 4;


    std::istringstream ss(inputstring);

    for(unsigned int i = 0; i < num_vals; ++i)
    {
        char rawhex[8];

// This ifdef is wrong: checking whether BIG_ENDIAN is defined is not a
// valid endianness test, since the macro is typically always defined.
#ifdef BIG_ENDIAN
        rawhex[6] = ss.get();
        rawhex[7] = ss.get();

        rawhex[4] = ss.get();
        rawhex[5] = ss.get();

        rawhex[2] = ss.get();
        rawhex[3] = ss.get();

        rawhex[0] = ss.get();
        rawhex[1] = ss.get();
#else
        rawhex[0] = ss.get();
        rawhex[1] = ss.get();

        rawhex[2] = ss.get();
        rawhex[3] = ss.get();

        rawhex[4] = ss.get();
        rawhex[5] = ss.get();

        rawhex[6] = ss.get();
        rawhex[7] = ss.get();
#endif

        if(ss.good())
        {
            std::stringstream convert;
            convert << std::hex << rawhex;
            int32_t val;
            convert >> val;

            std::cerr << (*(float*)(&val)) << "\n";
        }
        else
        {
            std::ostringstream os;
            os << "Not enough values in LUT data. Found " << i;
            os << ". Expected " << num_vals;
            std::cerr << os.str() << std::endl;
            throw std::exception();
        }
    }
}

(compiles on OS X 10.7/gcc-4.2.1, with a simple g++ blah.cpp)

Particularly, I'd like to get rid of the BIG_ENDIAN macro stuff; I'm sure there is a nicer way to do this, as this post discusses.
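
For example, I gather endianness could be checked at runtime with something like this (an untested sketch):

#include <stdint.h>

// Sketch: inspect the first byte of a known multi-byte value.
bool is_little_endian()
{
    uint16_t probe = 1;
    return *reinterpret_cast<unsigned char *>(&probe) == 1;
}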

A few other random details: I can't use Boost (too large a dependency for the project). The string will usually contain between 1536 (8³ × 3) and 98304 (32³ × 3) float values, and at most 786432 (64³ × 3).

(edit2: added another value, 00000080 == -0.0)

Upvotes: 6

Views: 2237

Answers (3)

George Skoptsov

Reputation: 3971

I think the whole istringstream business is overkill. It's much easier to parse this yourself, one digit at a time.

First, create a function to convert a hex digit into an integer:

#include <cctype>  // for tolower/isdigit

// Convert one hex digit to its value, or return -1 on invalid input.
signed char htod(char c)
{
  c = tolower(c);
  if(isdigit(c))
    return c - '0';

  if(c >= 'a' && c <= 'f')
    return c - 'a' + 10;

  return -1;
}

Then simply convert the string into an integer. The code below doesn't check for errors and assumes big endianness -- but you should be able to fill in the details.

uint32_t t = 0;  // 32 bits, to match the size of a float
for(size_t i = 0; i < s.length(); ++i)
  t = (t << 4) | htod(s[i]);

Then your float is

float f = * (float *) &t;
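
Note that the pointer cast technically violates strict aliasing; the same bit reinterpretation can be done with memcpy (a minimal sketch):

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    uint32_t t = 0x3F800000;        // bit pattern of 1.0f
    float f;
    std::memcpy(&f, &t, sizeof f);  // copy the raw bits into the float
    std::cout << f << "\n";         // prints 1
}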

Upvotes: 1

dbr

Reputation: 169663

This is what we ended up with, in OpenColorIO/src/core/FileFormatIridasLook.cpp:

(Amardeep's answer, with the uint32_t fix, would likely also work.)

    // convert hex ascii to int
    // return true on success, false on failure
    bool hexasciitoint(char& ival, char character)
    {
        if(character>=48 && character<=57) // [0-9]
        {
            ival = static_cast<char>(character-48);
            return true;
        }
        else if(character>=65 && character<=70) // [A-F]
        {
            ival = static_cast<char>(10+character-65);
            return true;
        }
        else if(character>=97 && character<=102) // [a-f]
        {
            ival = static_cast<char>(10+character-97);
            return true;
        }

        ival = 0;
        return false;
    }

    // convert array of 8 hex ascii to f32
    // The input hexascii is required to be a little-endian representation
    // as used in the iridas file format
    // "AD10753F" -> 0.9572857022285461f on ALL architectures

    bool hexasciitofloat(float& fval, const char * ascii)
    {
        // Convert all ASCII numbers to their numerical representations
        char asciinums[8];
        for(unsigned int i=0; i<8; ++i)
        {
            if(!hexasciitoint(asciinums[i], ascii[i]))
            {
                return false;
            }
        }

        unsigned char * fvalbytes = reinterpret_cast<unsigned char *>(&fval);

#if OCIO_LITTLE_ENDIAN
        // Since incoming values are little endian, and we're on little endian
        // preserve the byte order
        fvalbytes[0] = (unsigned char) (asciinums[1] | (asciinums[0] << 4));
        fvalbytes[1] = (unsigned char) (asciinums[3] | (asciinums[2] << 4));
        fvalbytes[2] = (unsigned char) (asciinums[5] | (asciinums[4] << 4));
        fvalbytes[3] = (unsigned char) (asciinums[7] | (asciinums[6] << 4));
#else
        // Since incoming values are little endian, and we're on big endian
        // flip the byte order
        fvalbytes[3] = (unsigned char) (asciinums[1] | (asciinums[0] << 4));
        fvalbytes[2] = (unsigned char) (asciinums[3] | (asciinums[2] << 4));
        fvalbytes[1] = (unsigned char) (asciinums[5] | (asciinums[4] << 4));
        fvalbytes[0] = (unsigned char) (asciinums[7] | (asciinums[6] << 4));
#endif
        return true;
    }
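
A minimal usage sketch, assuming the two functions above are in scope and OCIO_LITTLE_ENDIAN is set by the build (as it is in the OpenColorIO sources):

    #include <iostream>

    int main()
    {
        const char * lutdata = "0000003F" "0000803F" "AD10753F" "00000080";
        for(unsigned int i=0; i<4; ++i)
        {
            float f = 0.0f;
            if(hexasciitofloat(f, lutdata + i*8))
            {
                std::cerr << f << "\n"; // 0.5, 1, 0.957286, -0
            }
        }
        return 0;
    }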

Upvotes: -1

Amardeep AC9MF

Reputation: 19054

The following is your updated code, modified to remove the #ifdef BIG_ENDIAN block. It uses a read technique that should be independent of the host byte order: the hex octets (which are little-endian in your source string) are read into a big-endian string compatible with the iostream std::hex operator. Once in this format, the host byte order no longer matters.

Additionally, it fixes a bug: rawhex needs to be zero-terminated before being inserted into convert, otherwise it can pick up trailing garbage in some cases.

I do not have a big endian system to test on, so please verify on your platform. This was compiled and tested under Cygwin.

#include <sstream>
#include <iostream>
#include <cstdint>  // for uint32_t

int main()
{
    // The hex-encoded string, and number of values are loaded from a file.
    // The num_vals might be wrong, so some basic error checking is needed.
    std::string inputstring = "0000003F0000803FAD10753F00000080";
    int num_vals = 4;
    std::istringstream ss(inputstring);
    size_t const k_DataSize = sizeof(float);
    size_t const k_HexOctetLen = 2;

    for (uint32_t i = 0; i < num_vals; ++i)
    {
        char rawhex[k_DataSize * k_HexOctetLen + 1];

        // read little endian string into memory array
        for (uint32_t j=k_DataSize; (j > 0) && ss.good(); --j)
        {
            ss.read(rawhex + ((j-1) * k_HexOctetLen), k_HexOctetLen);
        }

        // terminate the string (needed for safe conversion)
        rawhex[k_DataSize * k_HexOctetLen] = 0;

        if (ss.good())
        {
            std::stringstream convert;
            convert << std::hex << rawhex;
            uint32_t val;
            convert >> val;

            std::cerr << (*(float*)(&val)) << "\n";
        }
        else
        {
            std::ostringstream os;
            os << "Not enough values in LUT data. Found " << i;
            os << ". Expected " << num_vals;
            std::cerr << os.str() << std::endl;
            throw std::exception();
        }
    }
}
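
If C++11 is available (which the gcc-4.2.1 toolchain above may not offer), a sketch of the same conversion using std::stoul and memcpy in place of the stringstream and pointer cast:

#include <cstdint>
#include <cstring>
#include <iostream>
#include <string>

int main()
{
    std::string inputstring = "0000003F0000803FAD10753F00000080";

    for (std::string::size_type i = 0; i + 8 <= inputstring.size(); i += 8)
    {
        // Reorder the little-endian hex octets into big-endian order,
        // then parse the whole chunk as one unsigned number.
        std::string chunk = inputstring.substr(i, 8);
        std::string bigendian = chunk.substr(6, 2) + chunk.substr(4, 2)
                              + chunk.substr(2, 2) + chunk.substr(0, 2);

        uint32_t bits = static_cast<uint32_t>(std::stoul(bigendian, 0, 16));

        float f;
        std::memcpy(&f, &bits, sizeof f);  // avoids the aliasing cast
        std::cerr << f << "\n";            // 0.5, 1, 0.957286, -0
    }
}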

Upvotes: 1
