Reputation: 35
I've been learning a bit about bit fields and how they're stored. The struct below partitions a 32-bit unsigned int into 3 components: x, y, and z.
struct bit_num {
    unsigned int x : 4,
                 y : 8,
                 z : 20;
};
So the compiler determines how the fields will be placed within the unsigned int, and it's one of these two layouts:
[ x(4 bits) ][ y(8 bits) ][ z(20 bits) ]
or
[ z(20 bits) ][ y(8 bits) ][ x(4 bits) ]
My question is: using the struct above, how do I determine which of the two layouts is being used? Thanks in advance.
Upvotes: 0
Views: 211
Reputation: 67835
Wrap it in a union with a 4-element unsigned char array. In the program, set one of the bit fields to a known value. Print the array and you will know.
Nothing else is needed.
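A minimal sketch of that approach, reusing the struct from the question (the union and variable names here are my own illustration):

#include <stdio.h>

struct bit_num {
    unsigned int x : 4,
                 y : 8,
                 z : 20;
};

union probe {
    struct bit_num b;
    unsigned char c[4];   /* 4-element byte array overlaying the 32-bit word */
};

int main(void)
{
    union probe p = { .b = { .x = 0xF } };  /* known value in one field, rest zero */
    for (int i = 0; i < 4; i++)
        printf("%.2X ", p.c[i]);  /* the byte that reads 0F reveals where x sits */
    printf("\n");
    return 0;
}

On a little-endian machine such as x86, printing 0F 00 00 00 means x occupies the least significant bits of the word; F0 in the last byte would mean it occupies the most significant bits.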
Upvotes: 0
Reputation: 937
Here is an example program.
struct bit_num {
    unsigned int x : 4,
                 y : 8,
                 z : 20;
};

struct bit_num bm = {0x1, 0x23, 0x45678};

int main(void)
{
    return 0;
}
Compile it and use objdump -d -j .data a.out to dump the .data section.
000000000060102c <bm>:
60102c: 31 82 67 45
The bytes from low address to high address are 31 82 67 45. Because the most significant bit (MSB) of a binary number is the left-most bit, we write these four bytes from high address to low address:
45 67 82 31
Then we convert them to binary numbers:
01000101 01100111 10000010 00110001
This binary number can be grouped as follows:
MSB -------------------------- LSB
01000101011001111000 00100011 0001
     (20 bits)       (8 bits) (4 bits)
     z=0x45678        y=0x23  x=0x1
So you can see that x, y, and z are placed in the unsigned int from LSB to MSB.
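If objdump is not at hand, you can print the same bytes at run time; here is a minimal sketch (inspecting an object's bytes through an unsigned char pointer is well-defined C, and the expected output assumes the same little-endian machine):

#include <stdio.h>

struct bit_num {
    unsigned int x : 4,
                 y : 8,
                 z : 20;
};

struct bit_num bm = {0x1, 0x23, 0x45678};

int main(void)
{
    const unsigned char *p = (const unsigned char *)&bm;
    /* prints "31 82 67 45" here, matching the objdump output above */
    for (size_t i = 0; i < sizeof bm; i++)
        printf("%.2X ", p[i]);
    printf("\n");
    return 0;
}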
Upvotes: 0
Reputation: 754800
You can use a union to analyze the layout of the bit-fields. For example:
#include <stdio.h>

struct bit_num {
    unsigned int x : 4,
                 y : 8,
                 z : 20;
};

union bit_layout
{
    unsigned int w;                         /* the whole 32-bit word */
    struct bit_num b;                       /* the same word as bit-fields */
    unsigned char c[sizeof(unsigned int)];  /* the same word as raw bytes */
};

int main(void)
{
    union bit_layout bl = { .b = { .x = 0xF, .y = 0xA5, .z = 0xC800C } };
    printf(".x = 0x%X, .y = 0x%.2X, .z = 0x%.5X, .w = 0x%.8X\n",
           bl.b.x, bl.b.y, bl.b.z, bl.w);
    printf("c[0] = 0x%.2X, c[1] = 0x%.2X, c[2] = 0x%.2X, c[3] = 0x%.2X\n",
           bl.c[0], bl.c[1], bl.c[2], bl.c[3]);
    return 0;
}
On a Mac running macOS 10.13.6 with GCC 8.2.0, I get the output:
.x = 0xF, .y = 0xA5, .z = 0xC800C, .w = 0xC800CA5F
c[0] = 0x5F, c[1] = 0xCA, c[2] = 0x00, c[3] = 0xC8
So, the .x member of the bit-field is stored in the least significant nybble; the .y member is stored in the next 2 nybbles; and the .z member is stored in the 5 most significant nybbles. However, the byte layout shows that the byte of the array with index 0 holds the .x value and the less significant half of the .y value; the byte with index 1 holds the more significant half of the .y value and the least significant nybble of the .z value; and the remaining two bytes of the array contain the rest of the .z value. (The byte ordering reflects the little-endian layout of the Intel hardware in this Mac.)
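Reading a union member other than the one last written is valid in C (the bytes are simply reinterpreted), but if you prefer to avoid type punning altogether, memcpy gives the same view; a sketch of that variant, assuming the struct occupies exactly one 4-byte unsigned int as it does here:

#include <stdio.h>
#include <string.h>

struct bit_num {
    unsigned int x : 4,
                 y : 8,
                 z : 20;
};

int main(void)
{
    struct bit_num b = { .x = 0xF, .y = 0xA5, .z = 0xC800C };
    unsigned int w;
    unsigned char c[sizeof b];

    memcpy(&w, &b, sizeof w);  /* the bit-fields viewed as one word */
    memcpy(c, &b, sizeof c);   /* the bit-fields viewed as raw bytes */

    printf(".w = 0x%.8X\n", w);
    for (size_t i = 0; i < sizeof c; i++)
        printf("c[%zu] = 0x%.2X\n", i, c[i]);
    return 0;
}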
Upvotes: 0