Reputation: 580
Today in class I decided I was going to write the date in binary, but when thinking about how to write "2015" I immediately realized it was going to be a big number, exactly: 11111011111. I understand that each bit is the space a 0 or a 1 can take, and a byte is made up of 8 bits, but only 7 are used to write numbers (I don't know why), and 2015 in binary takes more than 7 places or even 8, so it occupies more than a byte. So how do computers manage numbers bigger than 255 (ASCII's biggest number)? For example, in a division, how do they divide 2 bytes, or how do they treat the 2 bytes as 1?
Maybe I have a wrong idea, so I would like you guys to explain this for the community (tell me I'm not the only one with this doubt).
Upvotes: 2
Views: 1626
Reputation: 36597
All values are "seen" by a computer as a set of bits. If more bits are used to represent a value, more distinct values can typically be represented.
A `char` is not necessarily required to be 8 bits. However, an `unsigned char` is required (by the standards) to represent at least the set of values between `0` and `255`, while a `signed char` is required to represent at least the set of values between `-127` and `127`. Each of those ranges requires a minimum of 8 bits in binary. It is implementation-defined whether a `char` is `signed` or `unsigned`, and there is nothing stopping an implementation from supporting a larger range of values than the standard requires. There is no restriction that only seven bits of the `char` types are used to write values - but a number of commonly used character sets (e.g. ASCII) only need values between 0 and 127, so novices often make the mistake of thinking that is some requirement of C. It is not.
As to representing larger ranges of values than a `char` - quite simply, there is nothing stopping a variable from having the same size as multiple characters. The minimum requirement for an `int` is to support the range -32767 to 32767, which requires 16 bits on a binary machine - equal to a pair of `char`s that are consecutive in memory (one after the other). It is not uncommon for modern implementations to support a 32-bit `int` (consisting of a set of 4 consecutive 8-bit characters), and therefore support a range of (at least) `-2147483647` to `2147483647`.
There is nothing magical about adding, subtracting, multiplying, or dividing values that are represented by multiple bits. The basic principles that apply to operations involving a single bit extend to a variable represented using a set of bits. There is a bit of book-keeping needed (e.g. carry bits when adding, accounting for sign bits), and some limitations if the result cannot be stored (e.g. `127 + 127` cannot be stored in a `signed char` that consists of only 8 bits).
To discuss how a binary computer does all these things in hardware, you will need to understand basic electronics (transistors, circuits, etc). The basic building block for everything is the transistor, together with the connections (wiring, resistors, capacitors, etc) between them. Essentially (very over-simplistically), each distinct bit requires a transistor (different voltages across different pins determine whether the bit is on or off), and a multi-bit variable is represented using a set of transistors.

Similarly, operations on each bit are implemented using logic circuits (gates, etc), and operations on multi-bit variables are implemented by logic circuits which either perform the same operation on each pair of bits in two variables in sequence (if there is only one logic circuit) or in parallel (by simply replicating the circuits, with one circuit for each bit, and a bit of additional circuitry for bookkeeping).

Since modern processors consist of many billions of transistors (and other circuit elements), there is a fair amount of freedom in digital hardware design to represent variables using multiple bits and to implement operations on those variables in various ways.
Upvotes: 0
Reputation: 15134
They store them in multiple bytes, usually 4 (32-bit arithmetic) or 8 (64-bit arithmetic). Floating-point numbers are a little more complicated, but basically have the form ±x·2^y. Sometimes people write their own classes that can handle larger numbers by breaking them down into chunks that the hardware can handle. You can do addition of really big numbers the same way you learned in grade school, but in base 4,294,967,296 (that is, 2^32, so each "digit" is a 32-bit value): add each column together from right to left, and if the result is too wide, carry the 1.
Sometimes, numbers can be either positive or negative, and you need to use one bit for the sign. Sometimes, though, the bits represent positive numbers only. So you have the choice between a byte being able to represent the range [-128, 127] or the range [0, 255].
Upvotes: 2