mkmostafa

Reputation: 3171

return type of access to bitfield

#include <cstdint>
#include <iostream>
#include <type_traits>

struct C
{
    uint32_t x : 2;
    bool y : 2;
};

int main()
{
    C c{0b1};

    std::cout << (static_cast<uint32_t>(0b1) << 31) << std::endl;
    std::cout << (c.x << 31) << std::endl;
    std::cout << (c.x << 10) << std::endl;
    std::cout << std::boolalpha << std::is_same_v<decltype(c.x), uint32_t> << std::endl;
    std::cout << std::boolalpha << std::is_same_v<decltype(c.y), bool> << std::endl;
}

Compile

g++ -g test.cpp -std=c++17

g++ (GCC) 8.2.0

Output

2147483648
-2147483648
1024
true
true

My question is about the type of the expression c.x, where x is a 2-bit bit-field member. According to the type_traits check, c.x has the same type that was declared in the class definition. However, that does not seem to hold at run time: when I try to set the highest bit by shifting, I get a negative number. Any ideas?

Upvotes: 1

Views: 420

Answers (1)

KamilCuk

Reputation: 141553

From the C++ draft (2019-04-12), [conv.prom] 7.3.6p5:

7.3.6 Integral promotions

A prvalue for an integral bit-field ([class.bit]) can be converted to a prvalue of type int if int can represent all the values of the bit-field;

From the C++ draft (2019-04-12), [expr.shift] 7.6.7p1:

7.6.7 Shift operators

The shift operators << and >> group left-to-right.
...
The operands shall be of integral or unscoped enumeration type and integral promotions are performed.

The declared type of c.x (what decltype(c.x) reports) is uint32_t; however, when c.x is used as an operand of <<, it undergoes integral promotion and is implicitly converted to int, because a 2-bit field's values all fit in int.
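
A minimal sketch (my addition, reusing the struct from the question) that makes the promotion visible at compile time:

#include <cstdint>
#include <type_traits>

struct C
{
    uint32_t x : 2;
};

int main()
{
    C c{0b1};

    // decltype on the unparenthesized member access yields the declared type.
    static_assert(std::is_same_v<decltype(c.x), uint32_t>);

    // Once the bit-field is used as a prvalue in an arithmetic expression it
    // is promoted: a 2-bit field fits in int, so the promoted operand and the
    // result of the shift both have type int.
    static_assert(std::is_same_v<decltype(+c.x), int>);
    static_assert(std::is_same_v<decltype(c.x << 1), int>);
}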

c.x holds 1, so c.x << 31 is 1 << 31, i.e. 0x80000000 (assuming sizeof(int) == 4 and CHAR_BIT == 8). That bit pattern, interpreted as an int in two's complement representation, equals -2147483648 (INT_MIN, i.e. std::numeric_limits<int>::min()).

Note that c.x << 31 sits right at the edge of what the standard allows. In C++17 it is not undefined behavior: because 1 * 2^31 is representable in unsigned int, [expr.shift] says the result is that value converted to int, and that conversion is implementation-defined (here it yields INT_MIN). A shift whose result does not fit in the corresponding unsigned type would be undefined behavior; only C++20 makes such left shifts fully well-defined via two's complement wrapping.
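
As a side note, here is a minimal sketch (my addition, not part of the original answer) of how to keep the computation unsigned by casting the bit-field before shifting:

#include <cstdint>
#include <iostream>

struct C
{
    uint32_t x : 2;
};

int main()
{
    C c{0b1};

    // The bit-field is promoted to int before the shift: prints -2147483648
    // here (32-bit int, two's complement).
    std::cout << (c.x << 31) << std::endl;

    // Casting the bit-field to uint32_t first avoids the promotion to signed
    // int on typical targets where uint32_t is unsigned int, so the shift
    // stays unsigned and prints 2147483648.
    std::cout << (static_cast<uint32_t>(c.x) << 31) << std::endl;
}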

Moreover, what is the significance of the type declared in the class definition then?

Padding. Some compilers treat a change in the declared bit-field type as a boundary between allocation units (I don't know a better name for it). If the next bit-field member in the struct has a different declared type than the previous one, I would expect the compiler to start that member in a new, "fresh" allocation unit, so here I would expect padding bits between c.x and c.y because their types differ. If the struct were struct C { uint32_t x : 2; uint32_t y : 2; }, the compiler would be more likely to put both fields inside the same byte. The exact layout is implementation-defined; refer to your compiler's documentation (its ABI) or other resources. A small experiment you can run is sketched below.
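
Here is that experiment (my addition, an assumption-laden sketch, not from the original answer); the sizes it prints are implementation-defined:

#include <cstdint>
#include <iostream>

// Mixed declared types: some ABIs start y in a new allocation unit because
// its type differs from x's.
struct Mixed
{
    uint32_t x : 2;
    bool     y : 2;
};

// Same declared type for both fields: they are much more likely to share one
// allocation unit.
struct Same
{
    uint32_t x : 2;
    uint32_t y : 2;
};

int main()
{
    std::cout << sizeof(Mixed) << '\n';  // e.g. 8 with MSVC, 4 with GCC on x86-64
    std::cout << sizeof(Same) << '\n';   // typically 4
}

With GCC's Itanium-ABI layout on x86-64 I would expect both structs to occupy 4 bytes, while MSVC typically makes Mixed 8 bytes; either way, where the allocation-unit boundaries fall depends on the declared types and the ABI, not on anything the language guarantees.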

Upvotes: 3
