Reputation: 1534
I use the following code to write out an integer on my Windows machine:
std::ofstream f("data.bin", std::ios::out | std::ios::binary);
int nI = 65450;
f.write((char*)&nI, sizeof(int));
f.close();
I look at data.bin in a hex editor and see that the bytes appear in the same order as the integer is laid out in memory: AA FF 00 00 (65450 is 0xFFAA, stored least-significant byte first).
I am then using the following to read it in on a Linux machine and on a Solaris Sparc 10 machine:
std::ifstream f2("data.bin", std::ios::in | std::ios::binary);
int nI2 = 0;
f2.read((char*)&nI2, sizeof(int));
std::cout << "Read integer " << nI2 << std::endl;
f2.close();
As both Linux and Solaris are Big Endian, I expected the result to be AAFF, i.e. 43775. However, I get varying results. On Linux, the integer output is 65450, but on Solaris it is -1426128896.
As it turns out, -1426128896 is FFFFFFFFAAFF0000 in hex, but that is as far as I can get in understanding this.
So, can someone please explain why I am getting the results that I am? Thanks in advance.
Upvotes: 1
Views: 566
Reputation: 254501
both Linux and Solaris are Big Endian
Endianness is a property of the processor, not the operating system. Both Linux and Solaris are little-endian on a little-endian architecture (such as Intel x86/x64) and big-endian on a big-endian architecture (such as Sparc).
So, if your Linux platform is little-endian, you'd expect to get back the same result written by Windows - as you do.
I expected the result to be AAFF
Looking at the four bytes that were written to the file, you'd expect a big-endian interpretation of them to give 0xaaff0000, not 0xaaff, which (reinterpreted as a 32-bit signed value) gives the large negative value you see.
(That's assuming that Solaris for Sparc also uses 32-bit int values. The size of int is something else that can vary from machine to machine.)
Upvotes: 3