Dimitrije Vukajlovic

Reputation: 31

Confused by unsigned short behavior in C++

I am currently attempting to learn C++. I usually learn by playing around with things, and since I was reading up on data types and the different ways you can write integer literals (decimal, binary, hex, etc.), I decided to test how "unsigned short" works. I am now confused.

Here is my code:

#include <cstdio> 

int main(){
  unsigned short a = 0b0101010101010101;   // 16 binary digits
  unsigned short b = 0b01010101010101011;  // 17 binary digits
  unsigned short c = 0b010101010101010101; // 18 binary digits
  printf("%hu\n%hu\n%hu\n", a, b, c);
}

Integers of type "unsigned short" should have a size of 2 bytes on every platform I'm likely to run this on (the standard only guarantees at least 16 bits). I wrote the values as binary literals because that makes the source of my confusion easiest to see.
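For anyone who wants to confirm the size and maximum value locally, a quick check like this (using sizeof and std::numeric_limits) should do it:

#include <cstdio>
#include <limits>

int main() {
  // sizeof gives the size in bytes; numeric_limits gives the largest value the type can hold.
  printf("%zu\n", sizeof(unsigned short));                      // typically 2
  printf("%hu\n", std::numeric_limits<unsigned short>::max());  // 65535 if it is 16 bits wide
}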

Integer "a" has 16 digits in binary. 16 digits in a data type with a size of 16 bits (2 bytes). When I print it, I get the number 21845. Seems okay. Checks out.

Then it gets weird.

Integer "b" has 17 digits. When we print it, we get the decimal version of the whole 17 digit number, 43691. How does a binary number that takes up 17 digits fit into a variable that should only have 16 bits of memory allocated to it? Is someone lying about the size? Is this some sort of compiler magic?

And then it gets even weirder. Integer "c" has 18 digits, but here we hit the upper limit. When we build, we get the following warning:

/home/dimitrije/workarea/c++/helloworld.cpp: In function ‘int main()’:
/home/dimitrije/workarea/c++/helloworld.cpp:6:22: warning: large integer implicitly truncated to unsigned type [-Woverflow]
   unsigned short c = 0b010101010101010101;

Okay, so we can put 17 digits into 16 bits, but we can't put in 18. Makes some kind of sense, I guess? Like we can magic away one digit, but two won't work. But the supposed "truncation", rather than truncating to the actual maximum value of 17 digits (43691 in this example), truncates to what the limit logically should be, 21845.

This is frying my brain and I'm too far into the rabbit hole to stop now. Does anyone understand why C++ behaves this way?

---EDIT---

So after someone pointed out to me that my binary numbers started with a 0, I realized I was stupid.

However, when I took the 0 from the left-hand side and carried it to the right (meaning that a, b and c were actually 16, 17 and 18 bits respectively), I realized that the truncating behaviour still doesn't make sense. Here is the output:

43690 21846 43690
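For reference, the shifted literals looked roughly like this (reconstructed after the fact, so treat the exact bit patterns as approximate, though they do reproduce the output above):

#include <cstdio>

int main() {
  unsigned short a = 0b1010101010101010;   // 16 binary digits
  unsigned short b = 0b10101010101010110;  // 17 binary digits
  unsigned short c = 0b101010101010101010; // 18 binary digits
  printf("%hu %hu %hu\n", a, b, c);        // prints 43690 21846 43690
}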

43690 is the maximum value for 16 bits. I could've checked this before asking the original question and saved myself some time.

Why does 17 bits truncate to 15, and 18 (and also 19, 20, 21) truncate to 16?

--EDIT 2---

I've changed all the digits in my integers to 1, and my mistake makes sense now: I get back 65535. I took the time to type 2^16 into a calculator this time. The entirety of my question was a result of the fact that I didn't properly look at the binary values I was assigning.
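Something like this, if anyone wants to reproduce it (my exact file may have differed slightly):

#include <cstdio>

int main() {
  unsigned short a = 0b1111111111111111;  // 16 ones, i.e. 65535, the 16-bit maximum
  printf("%hu\n", a);                     // prints 65535
}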

Thanks to the person who linked implicit conversions; I will read up on that.

Upvotes: 3

Views: 1129

Answers (1)

GuidedHacking

Reputation: 3923

On most systems an unsigned short is 16 bits. No matter what you assign to an unsigned short, it will be truncated to 16 bits. In your example the leading digit is a 0, which is essentially ignored, in the same way that int x = 05; just equals 5 and not "05".

If you change that leading 0 to a 1, you will see the expected behaviour: the assignment truncates the value to 16 bits.

The range of an unsigned short int (16 bits) is 0 to 65535.

65535 = 1111 1111 1111 1111 in binary
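A quick sketch showing the truncation (the exact warning text will vary by compiler):

#include <cstdio>

int main() {
  // 17 significant bits: 0b10101010101010101 is 87381, which does not fit in 16 bits,
  // so the value is reduced modulo 65536 when it is stored.
  unsigned short s = 0b10101010101010101;  // gcc warns with -Woverflow here
  printf("%hu\n", s);                      // prints 21845 (87381 % 65536)
}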

Upvotes: 1
