Reputation: 35
I am trying to create a custom bit field. I tried this method:
#include <bitset>
#include <cstring>
#include <iostream>

struct foo
{
    unsigned z : 10;
    unsigned y : 16;
    unsigned x : 1;
    unsigned w : 16;
};

int main()
{
    foo test({0x345, 0x1234, 0x1, 0x1234});
    char bytes[8] = {0};
    std::cout << sizeof(test) << std::endl;
    std::memcpy(bytes, &test, 8);
    std::cout << sizeof(bool) << std::endl;
    for (size_t i = 0; i < sizeof(bytes) / sizeof(char); i++)
    {
        std::cout << std::bitset<8>(bytes[sizeof(bytes) / sizeof(char) - i - 1]);
    }
    std::cout << std::endl;
    return 0;
}
With the test values I am passing, it prints:
0000000000000000000100100011010000000100010010001101001101000101
(0000 0000 0000 0000 | 0001 0010 0011 0100 | 00 0001 | 0001 0010 0011 0100 | 11 0100 0101
should correspond to: 0x1234 | 0x1 | 0x1234 | 0x345
)
I am reading it from right to left: on the right side I have the first 10 bits (11 0100 0101), then the next 16 bits (0001 0010 0011 0100). After that field I am expecting just one bit for the next member, but I get 6 bits (00 0001) instead of (1) before the last 16 bits (0001 0010 0011 0100).
Do you have any insight into this, please?
Upvotes: 1
Views: 1145
Reputation: 3678
You have 5 spare bits because the next bit-field is too wide to fit in the space remaining in the current allocation unit: unsigned is typically 32 bits, z + y + x already use 27 of them, so w : 16 has to start a new 32-bit unit.
#include <cstdint> // types with fixed bit sizes

// force the compiler to remove padding between members
#pragma pack(push, 1)
struct foo
{
    // same underlying type, so the bit-fields can share one allocation unit
    uint32_t z : 10;
    uint32_t y : 16;
    uint32_t x : 1;
    // up to here you have 27 bits; another 16 will not fit in the
    // remaining 5, so the compiler inserts 5 bits of padding before w.
    // Nothing you can do about that within a single 32-bit unit.
    uint32_t w : 16; // or you can declare w with uint16_t
};
#pragma pack(pop)
Also, adjacent bit-fields declared with different underlying types may not share a storage unit; whether they do is implementation-defined.
Upvotes: 1