Reputation: 553
I would like to write hexadecimal 32-bit numbers to a binary file and then read some of them back using reinterpret_cast, e.g. as a 16-bit number. I am reading only 16 bits because that value determines the size of a packet. The code below shows an example. Maybe the problem is big vs. little endian?
#include <iostream> // std::cout
#include <fstream> // std::ifstream
#include <cstdint>
#include <vector>
void saveTestData(void)
{
    std::vector<std::uint_fast32_t> tab
    {
        0x50e5495c, 0xe7b50200, 0xbe6b2248, 0x08004510,
        0x015c2340, 0x0000ff11, 0x1567c0a8, 0x004cc0a8,
        0x003de290, 0xc35a0148, 0x00000000, 0x01200003,
        0x00620000, 0x01140002, 0x00010000, 0x8000ef40,
        0x22560003, 0xe0042150, 0x00006bbf, 0xd67c800f,
        0x5b5b0003, 0xe0032150, 0x00006bbf, 0xd67c8007,
        0x1b5d0003, 0xe0022150, 0x00006bbf, 0xd67c800a,
        0xab5d0023, 0xe0052150, 0x00006bbf, 0xd67c8011,
        0x8b5c6bbf, 0xd67c8c55, 0xaf896bbf, 0xd67c8c90,
        0x4f896bbf, 0xd67c8cd4, 0xef8a6bbf, 0xd67c8d0d,
        0x1f8a6bbf, 0xd67c8d43, 0x7f886bbf, 0xd67c8d8f,
        0x8f896bbf, 0xd67c8dc4, 0xcf886bbf, 0xd67c8e19,
        0x6f896bbf, 0xd67c8e4e, 0x1f8a6bbf, 0xd67c8e82,
        0xcf8a6bbf, 0xd67c8ed7, 0x4f896bbf, 0xd67c8f0c,
        0xef896bbf, 0xd67c8f4f, 0x8f896bbf, 0xd67c8f96,
        0xef8a6bbf, 0xd67c8fdb, 0xcf896bbf, 0xd67c9008,
        0xbf89000e, 0x80001006, 0xf0724646, 0xb45b0000,
        0x00004646, 0xb45b0000, 0x00000000, 0x00000000,
        0x00000000, 0x00000000, 0x00004646, 0xb45b0000,
        0x00004646, 0xb45b0000, 0x00008000, 0x00000001,
        0x55550000, 0x0001aaaa, 0xaaaa0000, 0x01200003,
        0x00620000, 0x01140002, 0x00010000, 0x8000ef40,
        0x22560003, 0xe0042150, 0x0000
    };
    std::ofstream file;
    file.open("test.omgwtf", std::ofstream::binary);
    if(file.good())
    {
        file.write(reinterpret_cast<char*>(tab.data()), tab.size()*sizeof(std::uint_fast32_t));
        file.close();
    }
}
int main()
{
    saveTestData();
    std::ifstream file("test.omgwtf", std::ifstream::binary);
    if(file.good())
    {
        file.seekg(0, file.end);
        uint32_t length = file.tellg();
        file.seekg(0, file.beg);
        char *buffer = new char[length];
        std::cout << "length = " << length << std::endl;
        file.read(buffer, length);
        std::uint_fast32_t *number32 = reinterpret_cast<std::uint_fast32_t*>(buffer);
        std::cout << "1 number32 = " << *number32 << std::endl; // ok
        number32 = reinterpret_cast<std::uint_fast32_t*>(buffer+4);
        std::cout << "2 number32 = " << *number32 << std::endl; // ok
        // read 0xbe6b (16 bits not 32)
        // 0xbe6b (hex) = 48747 (dec)
        std::uint_fast16_t *number16 = reinterpret_cast<std::uint_fast16_t*>(buffer+8);
        std::cout << "3 number16 = " << *number16 << std::endl; // not ok!? why?
        // read 0x2248 (16 bits not 32)
        // 0x2248 (hex) = 8776 (dec)
        number16 = reinterpret_cast<std::uint_fast16_t*>(buffer+10);
        std::cout << "4 number16 = " << *number16 << std::endl; // not ok!? why?
        file.close();
        delete [] buffer;
    }
    return 0;
}
How do I read number16? Examples 1 and 2 are ok. Shouldn't example 3 be 48747 instead of 3194692168, and example 4 be 8776 instead of 1158725227?
clear; g++ test2.cpp -std=c++11 -o test2; ./test2
Upvotes: 0
Views: 201
Reputation: 283684
std::ios::binary is badly named; it really controls newline translation.

iostreams are only for text. You can have text with no newline translation (use std::ios::binary; the file ends up with the Unix newline convention, \n only) or text with newline translation (don't use std::ios::binary; the file ends up following the OS convention, such as \n, \r\n, or even \r).

But even with std::ios::binary, the EOF character (ASCII 26) might be recognized and end input. Or not. The standard doesn't say. The standard doesn't provide any mechanism for untranslated file access.
People keep trying to design a better C++ standard I/O mechanism that separates file access from text handling, but no one has yet made everyone happy.
For binary files, use a low-level I/O mechanism. Even <stdio.h> is better (fewer translations) than iostreams, but it still has some. OS-specific functions, or cross-platform wrappers that use the OS functions underneath (like boost::asio), are what you need for binary file access.
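For example, a minimal sketch of reading the whole file through <cstdio> instead of an ifstream, assuming the same "test.omgwtf" file written above (the readAll helper is made up for illustration, not part of the question's code):

#include <cstdio>
#include <cstdint>
#include <vector>

// Read an entire file into a byte buffer using <cstdio>.
std::vector<unsigned char> readAll(const char* path)
{
    std::vector<unsigned char> bytes;
    std::FILE* f = std::fopen(path, "rb");   // "rb" asks for untranslated bytes
    if (!f)
        return bytes;
    std::fseek(f, 0, SEEK_END);
    long length = std::ftell(f);             // file size in bytes
    std::fseek(f, 0, SEEK_SET);
    if (length > 0)
    {
        bytes.resize(static_cast<std::size_t>(length));
        std::fread(bytes.data(), 1, bytes.size(), f);
    }
    std::fclose(f);
    return bytes;
}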
In addition, you have strict aliasing violations all over the place. Don't use reinterpret_cast like that; instead, use memcpy or read blocks of the correct size individually from the input file.
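A rough sketch of the memcpy approach (the load16 helper and its offset parameter are made up for illustration; the caller is responsible for picking a valid offset):

#include <cstdint>
#include <cstring>

// Copy two bytes out of the buffer into a 16-bit integer instead of
// casting the pointer, so there is no aliasing violation.
std::uint16_t load16(const char* buffer, std::size_t offset)
{
    std::uint16_t value = 0;
    std::memcpy(&value, buffer + offset, sizeof value);
    return value;  // still in host byte order
}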
Finally, you are reading variables of the wrong size. uint_fast16_t is not 16 bits; it is 16 bits or more, whatever is fastest, and 32 bits is almost certainly faster on your CPU than 16 bits. If you want exactly 16 bits, use uint16_t. If you want as close as possible (but not less), use uint_least16_t. The uint_fast family of types is good for local variables such as loop counters, but useless for I/O because of the unknown size.
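A quick way to see this on your own machine is to print the sizes; this small sketch is not part of the question's program, and the output varies by platform and ABI:

#include <cstdint>
#include <iostream>

// Print the actual sizes of the exact-width and fast types on this platform.
// uint_fast16_t is frequently wider than 2 bytes, which throws off the
// byte offsets used in the question.
int main()
{
    std::cout << "sizeof(std::uint16_t)      = " << sizeof(std::uint16_t) << '\n';
    std::cout << "sizeof(std::uint_fast16_t) = " << sizeof(std::uint_fast16_t) << '\n';
    std::cout << "sizeof(std::uint32_t)      = " << sizeof(std::uint32_t) << '\n';
    std::cout << "sizeof(std::uint_fast32_t) = " << sizeof(std::uint_fast32_t) << '\n';
    return 0;
}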
And once you figure all that out, you need to worry about the endianness of your original data: it is written as a sequence of 32-bit (or wider) values, and whether the high or low half gets written to the file first is platform dependent.
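If the words are meant to be stored big-endian on disk (most significant byte first), which is what the expected values 0xbe6b and 0x2248 in the question imply, you can decode them byte by byte so the result does not depend on the host's byte order. A sketch under that assumption; the helper names are made up, and the posted writer does not guarantee this layout:

#include <cstdint>

// Decode a 16-bit big-endian value from two bytes.
std::uint16_t load16_be(const unsigned char* p)
{
    return static_cast<std::uint16_t>((p[0] << 8) | p[1]);
}

// Decode a 32-bit big-endian value from four bytes.
std::uint32_t load32_be(const unsigned char* p)
{
    return (static_cast<std::uint32_t>(p[0]) << 24) |
           (static_cast<std::uint32_t>(p[1]) << 16) |
           (static_cast<std::uint32_t>(p[2]) << 8)  |
            static_cast<std::uint32_t>(p[3]);
}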
Upvotes: 2