TheTechGuy

Reputation: 17354

How does the compiler display the *inaccurate* value of a float?

I would like to know the mechanism by which the compiler shows the inaccurate value of a float. Example:

float a = 0.056;
printf("value = %f",a); // this prints "value = 0.056"

If you try to store 0.056 in binary floating-point format, you get this (you can check it with an online decimal-to-binary converter):

0.00001110010101100000010000011000 which is equal to 0.0559999998658895

1. How does the compiler show 0.056 when it should show 0.055999999?

Let's take this example a little further:

#include <stdio.h>

int main(void)
{
    float a, b;

    a = 0.056;
    b = 0.064; // difference is 0.008

    printf("a=%f, b=%f", a, b);

    if (b - a == 0.008) // this fails
        printf("\n%f - %f == %f subtraction is correct", b, a, b - a);
    else
        printf("\n%f - %f != %f Subtraction has round-off error\n", b, a, b - a);

    return 0;
}

Note that the else block gets executed here, while we expect the if block to be taken. Here is the output:

a=0.056000, b=0.064000
0.064000 - 0.056000 != 0.008000 Subtraction has round-off error

Again the values are shown the way we expect (with no round-off error), yet these values do have round-off errors; what gets printed is a disguised version. My second question is:

2. Is there a way to show the actual value of the stored number rather than the disguised one that we entered?
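
For reference, here is a minimal sketch that I would expect to expose the stored values, simply by asking printf for far more digits than its default six decimal places:

#include <stdio.h>

int main(void)
{
    float a = 0.056f, b = 0.064f;

    // Ask for 20 decimal places instead of %f's default 6
    printf("a   = %.20f\n", a);
    printf("b   = %.20f\n", b);
    printf("b-a = %.20f\n", b - a);
    return 0;
}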

Note: I have included C code built in Visual Studio 2008, but it should be reproducible in any language.

Upvotes: 2

Views: 570

Answers (5)

Ricardo Cárdenes

Reputation: 9172

I see a lot of talk about printf and how it is printing stuff "the wrong way" because it rounds things up, etc. printf is printing exactly what you would expect, once you notice that the actual number stored in a is 0.05600000172853469848.

The OP is assuming that the number stored in there is 0.0559999..., but a look at the actual number shows that's wrong:

#include <stdio.h>

int main() {
        float a = 0.056;
        printf("%A\n", a);
}

That will print 0X1.CAC084P-5, meaning our mantissa (0xCAC084) is 110010101100000010000100. That's 24 bits, though, not the 23 we can store in a 32-bit (IEEE-754 single precision) float, meaning that what's in there is actually 11001010110000001000010.

Remember that the mantissa is normalized and assumed to start with 1, so, applying the exponent, etc., our number is:

0.0000111001010110000001000010

which translates to 0.05600000172853469848

OP assumed this, instead:

0.00001110010101100000010000011

which is certainly more accurate, BUT it requires one bit more than the mantissa can hold, so we'd end up with this:

0.0000111001010110000001000001

or 0.05599999800324440002.

Of course, neither of these numbers is 0.056, but the error in the representation is higher for the latter one! So no surprise that we're getting what we're getting...
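
To double-check this directly, here is a small sketch (assuming the usual IEEE-754 single-precision layout in a 32-bit float) that dumps the raw bit fields:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float a = 0.056f;
    uint32_t bits;

    // Copy the float's object representation into an integer
    memcpy(&bits, &a, sizeof bits);

    // IEEE-754 single precision: 1 sign bit, 8 exponent bits, 23 mantissa bits
    printf("sign     = %u\n", (unsigned)(bits >> 31));
    printf("exponent = %u (biased; bias is 127)\n", (unsigned)((bits >> 23) & 0xFF));
    printf("mantissa = 0x%06X\n", (unsigned)(bits & 0x7FFFFF));
    return 0;
}

For 0.056f this should report a biased exponent of 122 (i.e. -5) and a mantissa of 0x656042, which is the 0xCAC084 from the %A output without the trailing padding bit that rounds it out to six whole hex digits.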

Upvotes: 2

Drew Dormann

Reputation: 63775

You are making the mistake of assuming that your float was ever accurate.

Floats are not designed to represent a precise value such as 0.0559999998658895. Libraries such as GMP exist for that.

Floats are designed to be fast and approximate.

In your example, 0.056 is displayed because the digits 0.0559999 are presumed to be accurate, and the digits that follow, 99865889..., are considered mostly noise, significant only enough to round 0.0559999 up to 0.056.

printf doesn't know that you consider 0.056 to be "correct". It just knows that a float printed in human-readable format is only accurate to about 6 significant digits, and 0.0560000 represents the closest match using that many digits.
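
A small sketch of that cutoff, using FLT_DIG from <float.h> (6 on common IEEE-754 platforms) to ask for exactly the number of digits a float is good for, and then a few more:

#include <stdio.h>
#include <float.h>

int main(void)
{
    float a = 0.056f;

    // FLT_DIG significant digits: the value still looks exact
    printf("%.*g\n", FLT_DIG, a);

    // Twelve significant digits: past the float's precision, the noise shows up
    printf("%.12g\n", a);
    return 0;
}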

Upvotes: 1

kennytm

Reputation: 523304

The compiler doesn't show anything 😉 Your program shows 0.056 because %f only shows the result to 6 decimal places. Try %.16f if you want to see all the inaccuracies (result: http://ideone.com/orrkk).

The manpage of printf shows many other options you can use with these specifiers.

Upvotes: 11

pmg

Reputation: 108938

2) If you have a C99 library, try printing the double in hexadecimal:

printf("%A\n", 56.0/100);

Upvotes: 1

Michael Borgwardt

Reputation: 346309

In most languages, the routines for printing float values actually print the shortest decimal number that is closer to the float value to be printed than to any other float value. This often (but not always) masks the rounding errors resulting from translation of decimal literals to float values.
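
A quick sketch of why this masking is legitimate: the short decimal, read back, lands on exactly the same float, so a printer that searches for the shortest such decimal may correctly output 0.056.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    float a = 0.056f;

    // "0.056" is not the stored value, but it is closer to this float
    // than to any neighbouring float, so parsing it recovers a exactly
    float back = (float)strtod("0.056", NULL);
    printf("%s\n", back == a ? "0.056 maps back to the same float"
                             : "0.056 maps to a different float");
    return 0;
}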

Upvotes: 4
