Reputation: 692
I need to read a binary file written on a little-endian OS. The extraction operator >> does not work on binary files. It seems that a simpleminded implementation along the lines of the code below works on Mac OS X running on Intel chips. I just wonder how kosher it is. Would I just need to swap bytes on big-endian machines?
#include <fstream>
#include <cstdint>
...
std::ifstream sfile(path, std::ios::binary);
...
uint32_t iValue;
sfile.read(reinterpret_cast<char *>(&iValue), sizeof(uint32_t));
double dValue;
sfile.read(reinterpret_cast<char *>(&dValue), sizeof(double));
Upvotes: 0
Views: 204
Reputation: 234644
Would I just need to swap bytes on big-endian machines?
The machine doesn't matter. C++ integers are numbers, not sequences of bytes. Sequences of bytes, unsurprisingly, have the byte order (aka endianness) property. Numbers don't. Five is five is five is 5 is V is IIIII is 101 is 12.
You want to obtain a number from its representation as a sequence of bytes with the little-endian byte order. C++ has a simple way to do that:
// data is an array of unsigned char holding the four bytes read from the file
i = (uint32_t(data[0])<<0) | (uint32_t(data[1])<<8) | (uint32_t(data[2])<<16) | (uint32_t(data[3])<<24);
This works on any machine because C++ integers are numbers on any machine.
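For instance, a self-contained sketch of that idea (the function name read_u32_le and the unsigned char buffer are my own choices for illustration, not part of the question's code) could look like:
#include <cstdint>
#include <istream>

// Read four bytes from a binary stream and assemble them as a
// little-endian 32-bit unsigned integer, independent of the host's byte order.
uint32_t read_u32_le(std::istream& in)
{
    unsigned char data[4];
    in.read(reinterpret_cast<char*>(data), sizeof(data));
    return (uint32_t(data[0])      ) |
           (uint32_t(data[1]) <<  8) |
           (uint32_t(data[2]) << 16) |
           (uint32_t(data[3]) << 24);
}
Error handling (checking the stream state after read) is omitted to keep the sketch short.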
For floating-point numbers, you need to know how they were encoded; byte order alone is not enough to describe that. In most mainstream implementations you can assume they are encoded as specified in the IEEE 754 standard. To read one in those implementations, assemble an integer from the bytes in the appropriate order and then bit-copy it into a floating-point variable, as follows:
uint32_t i = (uint32_t(data[0])<<0) | (uint32_t(data[1])<<8) | (uint32_t(data[2])<<16) | (uint32_t(data[3])<<24);
float f; // assumes IEEE754 single-precision
std::memcpy(&f, &i, sizeof(i)); // requires <cstring>
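If the double in the question was written as a little-endian IEEE 754 value, the same idea extends to 64 bits. A minimal sketch under that assumption (the helper name read_f64_le is invented for illustration):
#include <cstdint>
#include <cstring>
#include <istream>

// Read eight bytes as a little-endian 64-bit integer, then bit-copy the
// result into a double. Assumes IEEE754 double-precision encoding.
double read_f64_le(std::istream& in)
{
    unsigned char data[8];
    in.read(reinterpret_cast<char*>(data), sizeof(data));
    uint64_t i = 0;
    for (int k = 0; k < 8; ++k)
        i |= uint64_t(data[k]) << (8 * k);   // little-endian byte order
    double d;
    std::memcpy(&d, &i, sizeof(i));
    return d;
}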
Upvotes: 3