Donavon

Reputation: 323

C# - Type Sizes

Lately I've been trying to learn C#, but I'm having trouble understanding something. Each integral type has a size (signed 8-bit, unsigned 8-bit, signed 16-bit, unsigned 16-bit, etc.). I'm having a hard time understanding what exactly these sizes are and how each type gets its size. What does 8-bit, 16-bit, 32-bit, etc. mean? And signed and unsigned as well; I don't understand these. If anyone can refer me to a link with explanations of bits and signed and unsigned, or even explain it to me, that would be great. Thanks.

Upvotes: 1

Views: 773

Answers (2)

Erresen

Reputation: 2043

All types are stored as bits on your computer.

If you open up Calculator and put it in Programmer Mode (Alt + 3), you can see how numbers (integers, anyway) are represented as bits.

[Image: Calculator in Programmer Mode]

As you can see from the image above, 255 takes up bit 0 to bit 7 (the eight 1s in a row). 255 is the highest number you can represent in an 8-bit unsigned integer. If you add 1 to 255 in an 8-bit type, you'd get an overflow error (in a checked context) because 256 doesn't fit into 8 bits. In lower-level languages, without overflow errors, 255 + 1 equals 0, as the value rolls over.
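A minimal C# sketch of the roll-over and overflow described above (the checked/unchecked blocks are just used here to make both behaviours visible; by default, C# arithmetic is unchecked):

    using System;

    class OverflowDemo
    {
        static void Main()
        {
            // The same view as the calculator: 255 in binary is eight 1s
            Console.WriteLine(Convert.ToString(255, 2));  // 11111111

            byte b = 255;               // byte is C#'s unsigned 8-bit type
            unchecked
            {
                b = (byte)(b + 1);      // rolls over: 255 + 1 becomes 0
            }
            Console.WriteLine(b);       // 0

            checked
            {
                byte c = 255;
                c += 1;                 // throws System.OverflowException
            }
        }
    }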

Signed values use one bit to represent the sign (positive or negative), so a signed 8-bit number can go from -128 to 127.

+------+-----+----------------------+----------------------+---------------------+
|      | unsigned                   | signed                                     |
+------+-----+----------------------+----------------------+---------------------+
| bits | min | max                  | min                  | max                 |
+------+-----+----------------------+----------------------+---------------------+
| 8    | 0   | 255                  | -128                 | 127                 |
| 16   | 0   | 65535                | -32768               | 32767               |
| 32   | 0   | 4294967295           | -2147483648          | 2147483647          |
| 64   | 0   | 18446744073709551615 | -9223372036854775808 | 9223372036854775807 |
+------+-----+----------------------+----------------------+---------------------+
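The ranges in the table can be checked directly in C#, since every integral type exposes MinValue and MaxValue constants (a small sketch, not part of the original answer):

    using System;

    class RangeDemo
    {
        static void Main()
        {
            Console.WriteLine($"byte:   {byte.MinValue} to {byte.MaxValue}");     // 0 to 255
            Console.WriteLine($"sbyte:  {sbyte.MinValue} to {sbyte.MaxValue}");   // -128 to 127
            Console.WriteLine($"ushort: {ushort.MinValue} to {ushort.MaxValue}"); // 0 to 65535
            Console.WriteLine($"short:  {short.MinValue} to {short.MaxValue}");   // -32768 to 32767
            Console.WriteLine($"uint:   {uint.MinValue} to {uint.MaxValue}");     // 0 to 4294967295
            Console.WriteLine($"int:    {int.MinValue} to {int.MaxValue}");       // -2147483648 to 2147483647
            Console.WriteLine($"ulong:  {ulong.MinValue} to {ulong.MaxValue}");   // 0 to 18446744073709551615
            Console.WriteLine($"long:   {long.MinValue} to {long.MaxValue}");     // -9223372036854775808 to 9223372036854775807
        }
    }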

Floating-point numbers like floats and doubles are stored in a different way, which is not quite as easy to explain: https://en.wikipedia.org/wiki/Floating_point#Internal_representation

Basically, with integers more bits mean larger numbers; with floating point, more bits can mean larger numbers and/or more precision (more decimal places).
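As a rough illustration of the precision side (a sketch; the cut-off values follow from float having a 24-bit significand and double a 53-bit one):

    using System;

    class PrecisionDemo
    {
        static void Main()
        {
            // float (32-bit) can only represent whole numbers exactly up to 2^24
            float f = 16777217f;                // 2^24 + 1
            Console.WriteLine(f == 16777216f);  // True - the trailing 1 is lost

            // double (64-bit) still holds the same value exactly
            double d = 16777217d;
            Console.WriteLine(d == 16777216d);  // False - no precision lost yet
        }
    }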

It's also worth noting that an int is signed, whereas a uint is unsigned. All floating-point types are signed by specification.


Upvotes: 6

StoicFnord

Reputation: 193

The size determines how many bits are used to store the type.

E.g. an 8-bit int: 00000001 == 1

If a type is signed, then the first bit of the type determines whether it holds a positive or negative value.

e.g. 11111111 == -1 (using something called two's complement; more detail in the link below)
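A quick way to see this in C# (a small sketch; the cast simply reinterprets the same eight bits as a signed value):

    using System;

    class TwosComplementDemo
    {
        static void Main()
        {
            byte unsignedBits = 0b11111111;                // all eight bits set: 255 as unsigned

            // Reinterpret the same bit pattern as a signed 8-bit value
            sbyte signedValue = unchecked((sbyte)unsignedBits);

            Console.WriteLine(unsignedBits);               // 255
            Console.WriteLine(signedValue);                // -1
        }
    }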

A quick rundown on signed types can be found here: http://kias.dyndns.org/comath/13.html

Upvotes: 2
