Reputation: 55
Conceptually, I'm having a hard time understanding how a 32-bit unsigned integer (which is 4 bytes) can be represented as 8 bytes, the first four of which are encoded using the little-endian format and the last four of which are encoded using the big-endian format.
I'm specifically referring to the ISO 9660 format which encodes some 16-bit and 32-bit integers in this fashion.
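(For example, the 32-bit value 0x12345678 stored in a both-endian field would occupy 8 bytes on disc: 78 56 34 12, the little-endian half, followed by 12 34 56 78, the big-endian half.)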
I tried the following, but this obviously does not work because the BitConverter.ToUInt32() method only reads the four bytes starting at the given index.
byte[] leastSignificant = reader.ReadBytes(4, Endianness.Little);
byte[] mostSignificant = reader.ReadBytes(4, Endianness.Big);
byte[] buffer = new byte[8];
Array.Copy(leastSignificant, 0, buffer, 0, 4);
Array.Copy(mostSignificant, 0, buffer, 4, 4);
uint actualValue = BitConverter.ToUInt32(buffer, 0);
What is the proper way to read a 32-bit unsigned integer represented as 8 bytes encoded in both-endian format?
Upvotes: 1
Views: 268
Reputation: 941217
This is very typical for an ISO standard. The organization is not very good at creating decent standards, only at creating compromises among its members. There are two basic ways they do that: either pick a sucky standard that makes everybody equally unhappy, or pick more than one so that everybody can be happy. Encoding a number twice falls in the latter category.
There's some justification for doing it this way. Optical discs have lots of bits that are very cheap to duplicate, and their formats are often designed to keep the playback hardware as cheap as possible. Mastering a disc is often very convoluted because of that; the Blu-ray standard is particularly painful.
Since your machine is little-endian, you only care about the little-endian value. Simply ignore the big-endian equivalent. Technically you could add a check that they are the same, but that's just wasted effort.
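In code it just amounts to reading all 8 bytes and using the half that matches the byte order BitConverter uses on your machine. A minimal sketch, assuming a plain BinaryReader positioned at the field (the ReadBothEndianUInt32 helper name is my own, not part of any library):

using System;
using System.IO;

static class Iso9660Fields
{
    // Reads an ISO 9660 both-endian 32-bit field: the value stored as
    // 4 little-endian bytes followed by the same value as 4 big-endian bytes.
    public static uint ReadBothEndianUInt32(BinaryReader reader)
    {
        byte[] field = reader.ReadBytes(8);

        // BitConverter follows the machine's byte order, so pick the half
        // that is already laid out the way it expects.
        return BitConverter.IsLittleEndian
            ? BitConverter.ToUInt32(field, 0)   // first 4 bytes, little-endian
            : BitConverter.ToUInt32(field, 4);  // last 4 bytes, big-endian
    }
}

The same idea works for the 16-bit both-endian fields: read 4 bytes and call BitConverter.ToUInt16 on the appropriate half.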
Upvotes: 1