Reputation: 304007
The endianness of bitfields is implementation-defined. Is there a way to check at compile time, via some macro or other compiler flag, what gcc's bitfield endianness actually is?
In other words, given something like:
struct X {
    uint32_t a : 8;
    uint32_t b : 24;
};
Is there a way for me to know at compile time whether a is the first or the last byte in X?
Upvotes: 11
Views: 8178
Reputation: 2836
It might be of some interest that when the bitfields are each a multiple of 8 bits wide, the endianness of the architecture appears not to matter.
See here [godbolt.org]
I chose the ARM architecture in this godbolt example because it supports both big- and little-endian modes, which makes the differences easy to compare.
Note that whether the architecture is big- or little-endian, the 8-bit field is at the start of the struct in both cases.
I tested all of the compilers on godbolt that could generate readable assembly for the is_8bit_tag_at_start function, and they all appeared to return true.
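The exact code behind the godbolt link is not reproduced here, so the following is only a sketch of the kind of check it makes: a struct whose leading bitfield is exactly one byte wide, and a function that tests whether that field lands in the lowest-addressed byte. The struct layout, field names, and test value are my assumptions.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct X {
    uint32_t tag  : 8;    /* one whole byte */
    uint32_t rest : 24;   /* three whole bytes */
};

/* Does the 8-bit field occupy the lowest-addressed byte of the struct? */
bool is_8bit_tag_at_start(void)
{
    struct X x = { .tag = 0xAB, .rest = 0 };
    unsigned char first;
    memcpy(&first, &x, 1);   /* inspect the first byte of the object */
    return first == 0xAB;
}

On both big- and little-endian targets this should fold to a constant true: little-endian gcc allocates the field starting at the least significant bit (byte 0), and big-endian gcc starting at the most significant bit (also byte 0).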
Upvotes: 2
Reputation: 225637
On Linux systems, you can check the __BYTE_ORDER macro to see if it is __LITTLE_ENDIAN or __BIG_ENDIAN. While this is not authoritative, in practice it should work.
A hint that this is the right way to do it is in the definition of struct iphdr in netinet/ip.h, which is for an IP header. The first byte contains two 4-bit fields which are implemented as bitfields, so the order is important:
struct iphdr
{
#if __BYTE_ORDER == __LITTLE_ENDIAN
    unsigned int ihl:4;
    unsigned int version:4;
#elif __BYTE_ORDER == __BIG_ENDIAN
    unsigned int version:4;
    unsigned int ihl:4;
#else
# error "Please fix <bits/endian.h>"
#endif
    u_int8_t tos;
    u_int16_t tot_len;
    u_int16_t id;
    u_int16_t frag_off;
    u_int8_t ttl;
    u_int8_t protocol;
    u_int16_t check;
    u_int32_t saddr;
    u_int32_t daddr;
    /* The options start here. */
};
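For fields narrower than a byte, this declaration swap is exactly what pins the layout down. Here is a minimal sketch applying the same technique to two 4-bit fields (the struct and field names are mine; __BYTE_ORDER is assumed to come from glibc's <endian.h>):

#include <endian.h>   /* glibc: __BYTE_ORDER, __LITTLE_ENDIAN, __BIG_ENDIAN */

/* Keep 'a' in the high nibble of the first byte regardless of host
   byte order, the same trick netinet/ip.h uses for version/ihl. */
struct Y {
#if __BYTE_ORDER == __LITTLE_ENDIAN
    unsigned int b : 4;   /* allocated from the LSB: low nibble of byte 0 */
    unsigned int a : 4;   /* high nibble of byte 0 */
#elif __BYTE_ORDER == __BIG_ENDIAN
    unsigned int a : 4;   /* allocated from the MSB: high nibble of byte 0 */
    unsigned int b : 4;
#else
# error "Unknown byte order"
#endif
};

Note that a field that is a whole byte wide, like the a : 8 in the question, needs no swap at all: gcc puts a leading 8-bit field in the first byte under either layout, which matches the observation in the other answer.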
Upvotes: 11