user1132648


Float and Int Both 4 Bytes? How Come?

An int is 4 bytes with a range of roughly ±2^31. A float is also 4 bytes, with magnitudes from about 1.2×10^-38 up to 3.4×10^38.

Float encompasses so many more points on the real line, and yet it is equal in size to int. Is the sign-exponent-fraction representation of float so awesome (or the two's complement of int so pathetic) that this disparity in range arises? Am I missing something?

I just find it highly surprising that something which represents (virtually) the entire real line is the same size as something which represents only the integers.

Upvotes: 39

Views: 70619

Answers (7)

Brian

Reputation: 3558

In C, an int and a float each take up 4 bytes, or 32 bits, in memory. The difference between int and float is not the number of bits they occupy, but the way the ALU (Arithmetic Logic Unit) treats each number. An int is treated as the integer represented by its bits using two's complement notation. A float, on the other hand, is encoded (typically in IEEE 754 format) to represent a number in exponential form (e.g. 2.9979245×10^8 is a number in exponential form with coefficient 2.9979245 and exponent 8). The number of significant digits in the decimal representation of a floating point number is always about the same: 6-9 digits for a 32-bit float and 15-17 digits for a 64-bit float (or double). For example, the value 4.2949673×10^9 is what I get as the decimal representation of the closest 32-bit float to the number 2^32, and it has 8 significant digits.
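To make the "same bits, different interpretation" point concrete, here is a minimal sketch (assuming a 32-bit int and an IEEE-754 float, and using memcpy to sidestep strict-aliasing rules) that prints one 32-bit pattern both ways:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    float f = 2.9979245e8f;   /* the example value from above */
    uint32_t bits;

    memcpy(&bits, &f, sizeof bits);   /* copy the raw 32 bits */

    printf("interpreted as float   : %g\n", f);
    printf("same bits as an integer: %" PRIu32 " (0x%08" PRIX32 ")\n", bits, bits);
    return 0;
}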

It is a common misconception that a floating point number somehow represents more information than a fixed-point number, which is not true. While a 32-bit float can represent numbers of greater magnitude than a 32-bit int, it cannot represent all of them with as much precision. In total, this answer derives that a 32-bit float can represent around 4,278,190,081 unique numbers. This other answer contains a nice plot showing how these 32-bit floats are distributed on the real line.

While an int may not be able to represent numbers with as large a magnitude as a float, it represents each number in its valid range (-2^31 to 2^31-1) with the same precision. The largest number it can represent is 2,147,483,647, which has 10 significant digits. And in total, it can represent 2^32 = 4,294,967,296 unique numbers, which is more than for the 32-bit float!
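To see that exactness concretely, here is a minimal sketch (assuming an IEEE-754 binary32 float): INT_MAX is stored exactly as an int but rounds when converted to float.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int   i = INT_MAX;     /* 2147483647, stored exactly */
    float f = (float)i;    /* nearest float is 2147483648.0 */

    printf("int  : %d\n", i);
    printf("float: %.1f\n", f);   /* off by one: float has only 24 bits of precision */
    return 0;
}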

Upvotes: 63

LinconFive

Reputation: 2012

In fact, the size of a given data type differs across platforms. But let us assume here that float and int are the same size: 4 bytes.

4 Bytes equal 32 bits, and in binary, the range is

00000000000000000000000000000000

to

11111111111111111111111111111111

At this point, we should recall the geometric series: the all-ones value above equals

2^31 + 2^30 + ... + 2^1 + 2^0 = 2^32 - 1 = 4294967295

so 32 bits can encode 2^32 = 4294967296 distinct values.

Notice that C maps this mathematical range onto language types, either unsigned (0 to 2^32-1) or signed (-2^31 to 2^31-1, by shifting the range).

In the same way, it can define a float:

SEEEEEEEEMMMMMMMMMMMMMMMMMMMMMMM

S = sign (1 bit)

E = exponent (8 bits)

M = mantissa (23 bits)

The value calculation follows IEEE-754, which is not easy, but we can experiment with an online IEEE-754 converter :)

Now, the normal range (see the exponent trick? The stored exponent is biased, so the actual exponent is E-127, minus!) goes from

0 00000001 00000000000000000000000

to

0 11111110 11111111111111111111111

From math,

2^-126 * 1.00000000000000000000000 = 1.1754943508222875 × 10^-38

to

2^127 * 1.11111111111111111111111 = 3.4028234663852886 × 10^38

Notice there is a hidden leading 1 in the mantissa; without it, an all-zero mantissa would always give 0 :<
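To poke at these fields directly, here is a small sketch (assuming an IEEE-754 binary32 float, using memcpy to copy the bits) that extracts S, E, and M:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float f = 1.5f;   /* 1.5 = 1.1 in binary = 1.1b * 2^0 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint32_t sign     = bits >> 31;            /* S: 1 bit */
    uint32_t exponent = (bits >> 23) & 0xFF;   /* E: 8 bits, biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFF;       /* M: 23 bits, hidden 1 not stored */

    printf("S=%u E=%u (actual exponent %d) M=0x%06X\n",
           (unsigned)sign, (unsigned)exponent,
           (int)exponent - 127, (unsigned)mantissa);
    return 0;
}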

In the end, machines were built to make our lives easier, so before we choose a data type, we had better check out the ranges from the header files, e.g.

#include <stdio.h>
#include <limits.h>
#include <float.h>

int main()
{
    printf("Range of signed char %d to %d\n", SCHAR_MIN, SCHAR_MAX);
    printf("Range of unsigned char 0 to %d\n\n", UCHAR_MAX);

    printf("Range of signed short int %d to %d\n", SHRT_MIN, SHRT_MAX);
    printf("Range of unsigned short int 0 to %d\n\n", USHRT_MAX);

    printf("Range of signed int %d to %d\n", INT_MIN, INT_MAX);
    printf("Range of unsigned int 0 to %lu\n\n", UINT_MAX);

    printf("Range of signed long int %ld to %ld\n", LONG_MIN, LONG_MAX);
    printf("Range of unsigned long int 0 to %lu\n\n", ULONG_MAX);

    // In some compilers LLONG_MIN, LLONG_MAX
    printf("Range of signed long long int %lld to %lld\n", LLONG_MIN, LLONG_MAX); 
    // In some compilers ULLONG_MAX
    printf("Range of unsigned long long int 0 to %llu\n\n", ULLONG_MAX); 

    printf("Range of float %e to %e\n", FLT_MIN, FLT_MAX);
    printf("Range of double %e to %e\n", DBL_MIN, DBL_MAX);
    printf("Range of long double %e to %e\n", LDBL_MIN, LDBL_MAX);

    return 0;
}

Upvotes: 1

thunde47

Reputation: 1

The floating point standard (IEEE-754) uses base 2 rather than base 10, with 24 binary digits of precision. Because it is base 2, the format can represent only a finite set of real numbers exactly, which is the inherent reason for error when representing real numbers. It also means that, contrary to first appearances, float does not represent many more numbers than int.
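As a quick illustration (a minimal sketch; the exact output assumes an IEEE-754 binary32 float), base 2 cannot represent 1/10 exactly:

#include <stdio.h>

int main(void)
{
    float f = 0.1f;   /* 1/10 has no finite base-2 expansion */

    /* print extra digits to expose the nearest representable value */
    printf("%.20f\n", f);   /* something like 0.10000000149011611938 */
    return 0;
}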

Upvotes: 0

NPE

Reputation: 500327

I just find it highly surprising that something which represents (virtually) the entire real line is the same size as something which represents only the integers.

Perhaps this will become less surprising once you realize that there are lots of integers that a 32-bit int can represent exactly, and a 32-bit float can't.

A float can represent fewer distinct numbers than an int, but they're spread over a wider range.

It is also worth noting that the spacing between consecutive floats becomes wider as one moves away from zero, whereas it remains constant for consecutive ints.
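A minimal sketch of that widening spacing, using C99's nextafterf (assumes an IEEE-754 float; link with -lm on some systems):

#include <stdio.h>
#include <math.h>

int main(void)
{
    float values[] = { 1.0f, 1000.0f, 1e10f, 1e30f };

    /* the gap to the next representable float grows with magnitude */
    for (int i = 0; i < 4; i++) {
        float x   = values[i];
        float gap = nextafterf(x, INFINITY) - x;
        printf("near %e the next float is %e away\n", x, gap);
    }
    return 0;
}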

Upvotes: 32

CLo

Reputation: 3730

I believe the important point here is that an int is exact while a float may be rounded. A portion of the bits in a float describes the location of the decimal point (the exponent), while another portion holds the significant digits. So while you may be able to write 1.2E38, only the first several digits are meaningful and the rest are effectively filled with 0's.

From: http://en.wikipedia.org/wiki/Floating_point

"with seven decimal digits could in addition represent 1.234567, 123456.7, 0.00001234567, 1234567000000000, and so on"

It depends on how the particular system implements floats.
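A minimal sketch of that effect (assuming an IEEE-754 binary32 float; FLT_DIG is the number of decimal digits guaranteed to survive a round trip):

#include <stdio.h>
#include <float.h>

int main(void)
{
    float f = 1.2e38f;

    printf("FLT_DIG = %d\n", FLT_DIG);   /* typically 6 */
    printf("%.10e\n", f);   /* digits beyond the first 7 or so are rounding noise */
    return 0;
}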

Upvotes: 2

Eugen Rieck

Reputation: 65274

Both types represent roughly the same number of points on the real line - they are just spaced differently. The gap between the highest and the second-highest float is 2^104, about 2×10^31!
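A minimal sketch that measures that gap (assumes an IEEE-754 float; nextafterf is C99, link with -lm on some systems):

#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void)
{
    float highest = FLT_MAX;
    float second  = nextafterf(highest, 0.0f);   /* next float toward zero */

    printf("gap between the two largest floats: %e\n",
           highest - second);   /* 2^104, roughly 2e31 */
    return 0;
}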

Upvotes: 0

NiklasMM

Reputation: 2972

Maybe you should learn how floating point numbers are represented in a computer. It is very different from the way integers are represented.

Also, it is worth noting that an int is not always 4 bytes long; it depends on the system.
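A one-line check on your own system (a minimal sketch; %zu requires C99):

#include <stdio.h>

int main(void)
{
    /* sizes are implementation-defined, so check rather than assume */
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(float) = %zu\n", sizeof(float));
    return 0;
}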

Upvotes: 0
