markzzz

Reputation: 47995

Aren't 6 digits guaranteed in single precision?

Here's the code:

#include <iostream>
#include <limits>

typedef std::numeric_limits<float> fl;

int main()
{   
    std::cout.precision(100);    

    float f1 = 9999978e3;
    std::cout << f1 << " (9999978e3 decimal number stored as binary)" << std::endl;

    float f2 = 9999979e3;
    std::cout << f2 << " (9999979e3 decimal number stored as binary)" << std::endl;

    float f3 = 9999990e3;
    std::cout << f3 << " (9999990e3 decimal number stored as binary)" << std::endl;

    std::cout << "-----^" << std::endl;
}

which prints:

9999978496 (9999978e3 decimal number stored as binary)
9999978496 (9999979e3 decimal number stored as binary)
9999989760 (9999990e3 decimal number stored as binary)
-----^ 

The decimal numbers stored as binary correctly keep the 6th digit of 9999978e3 and 9999979e3 (it's 7 in both cases), but for 9999990e3 the stored 6th digit is 8 instead of 9.

Shouldn't floating point precision always guarantee the first 6 digits?

Yes, if I round the trailing 89 I get 9, but that's not the same thing; that only works/makes sense "visually".

Later, when I apply math to that value, it will operate on a xxxxx8 number (9999989760) and not on a xxxxx9 one: the 6th digit is off by 1 at that magnitude.

Upvotes: 2

Views: 372

Answers (2)

Bathsheba

Reputation: 234785

Not in the way you think it's guaranteed, no.

By way of offering a counter-example, for an IEEE754 single precision floating point, the closest number to

9999990000

is

9999989760

What is guaranteed is that your number and the float, when both are rounded to six significant figures, will be the same. Six here is the value of FLT_DIG on your platform, assuming it implements IEEE754. E.g. the closest float to 9999979000 is 9999978496, and both round to 9.99998e+09.

See http://www.exploringbinary.com/floating-point-converter/

Upvotes: 9

Omnifarious

Reputation: 56068

You will never get a precise number of base-10 digits, because the number isn't stored in base 10 and most base-10 fractions have no exact representation in base 2. There will almost always be a rounding error, and through repeated additions you can magnify that roundoff error.

For example, 1/5 has this binary pattern:

111110010011001100110011001101

We only care about the last 23 bits (the mantissa) for what you're talking about...

10011001100110011001101

Notice the repeating pattern of 1001. To truly represent 0.2 that pattern would have to repeat forever; instead it is cut off, with the last bit rounded up to 1.

Multiply that number by enough and the roundoff error is magnified.

If you need a precise number of decimal digits, then use integer math and handle the rounding yourself (in the case of division) in a manner satisfactory to you. Or use a bigint library with rational numbers; you'll end up with huge fractions that take a long time to compute, but you'll have infinite precision. Though, of course, any number that can't be represented as a rational, like sqrt(6) or pi, will still have a roundoff error.

Upvotes: 2
