Reputation: 47995
I used to think that float could use at most 6 digits and double 15 after the decimal point. But if I print the limits here:
#include <iostream>
#include <limits>

typedef std::numeric_limits<float> fl;
typedef std::numeric_limits<double> dbl;

int main()
{
    std::cout << fl::max_digits10 << std::endl;
    std::cout << dbl::max_digits10 << std::endl;
}
it prints 9 for float and 17 for double?
Upvotes: 1
Views: 247
Reputation: 6936
There isn't an exact correspondence between decimal digits and binary digits.
IEEE 754 single precision uses 23 bits plus 1 for the implicit leading 1. Double precision uses 52+1 bits.
To get the equivalent decimal precision, use
log10(2^binary_digits) = binary_digits*log10(2)
For single precision this is
24*log10(2) = 7.22
and for double precision
53*log10(2) = 15.95
See here and also the Wikipedia page which I don't find to be particularly concise.
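To double-check, here is a minimal sketch (assuming an IEEE 754 platform) that reproduces these figures straight from numeric_limits:
#include <cmath>
#include <iostream>
#include <limits>

int main()
{
    // digits is the number of mantissa bits, including the implicit leading 1
    std::cout << std::numeric_limits<float>::digits * std::log10(2.0) << std::endl;   // ~7.22
    std::cout << std::numeric_limits<double>::digits * std::log10(2.0) << std::endl;  // ~15.95
}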
Upvotes: 0
Reputation: 10665
From cppreference:
Unlike most mathematical operations, the conversion of a floating-point value to text and back is exact as long as at least max_digits10 were used (9 for float, 17 for double): it is guaranteed to produce the same floating-point value, even though the intermediate text representation is not exact. It may take over a hundred decimal digits to represent the precise value of a float in decimal notation.
For example (I am using http://www.exploringbinary.com/floating-point-converter/ to facilitate the conversion, with double as the precision format):
1.1e308 => 109999999999999997216016380169010472601796114571365898835589230322558260940308155816455878138416026219051443651421887588487855623732463609216261733330773329156055234383563489264255892767376061912596780024055526930962873899746391708729279405123637426157351830292874541601579169431016577315555383826285225574400
Using 16 significant digits:
1.099999999999999e308 => 109999999999999897424000903433019889783160462729437595463026208549681185812946033955861284690212736971153169019636833121365513414107701410594362313651090292197465320141992473263972245213092236035710707805906167798295036672550192042188756649080117981714588407890666666245533825643214495197630622309084729180160
Using 17 significant digits:
1.0999999999999999e308 => 109999999999999997216016380169010472601796114571365898835589230322558260940308155816455878138416026219051443651421887588487855623732463609216261733330773329156055234383563489264255892767376061912596780024055526930962873899746391708729279405123637426157351830292874541601579169431016577315555383826285225574400
which is the same as the original
More than 17 significant digits:
1.09999999999999995555e308 => 109999999999999997216016380169010472601796114571365898835589230322558260940308155816455878138416026219051443651421887588487855623732463609216261733330773329156055234383563489264255892767376061912596780024055526930962873899746391708729279405123637426157351830292874541601579169431016577315555383826285225574400
Still the same as the original.
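The same round trip can be done in code instead of by hand; a minimal sketch (the exact digits printed depend on your standard library, but the comparison is guaranteed to hold when max_digits10 digits are used):
#include <iostream>
#include <limits>
#include <sstream>

int main()
{
    const double original = 1.1e308;

    // Write the value out with max_digits10 (17) significant digits...
    std::ostringstream out;
    out.precision(std::numeric_limits<double>::max_digits10);
    out << original;

    // ...and read it back in.
    double roundtrip;
    std::istringstream(out.str()) >> roundtrip;

    std::cout << out.str() << std::endl;
    std::cout << (roundtrip == original ? "equal" : "different") << std::endl;  // equal
}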
Upvotes: 2
Reputation:
You're confusing digits10 and max_digits10.
If digits10 is 6, then any number with six decimal digits can be converted to the floating point type, and back, and when rounded back to six decimal digits, produces the original value.
If max_digits10 is 9, then there exist at least two floating point numbers that, when converted to decimal, produce the same initial 8 decimal digits.
digits10 is the number you're looking for, based on your description. It's about converting from decimal to binary floating point and back to decimal.
max_digits10 is about converting from binary floating point to decimal and back to binary floating point.
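A short sketch showing both directions (the literal 0.123456 is just an arbitrary example value):
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

int main()
{
    // digits10 (6 for float): a 6-significant-digit decimal survives
    // the decimal -> float -> decimal round trip.
    float f = 0.123456f;
    std::cout << std::setprecision(std::numeric_limits<float>::digits10)
              << f << std::endl;  // prints 0.123456 again

    // max_digits10 (9 for float): 9 significant digits are enough for the
    // float -> decimal -> float round trip to recover the exact same value.
    std::ostringstream out;
    out << std::setprecision(std::numeric_limits<float>::max_digits10) << f;
    float back;
    std::istringstream(out.str()) >> back;
    std::cout << std::boolalpha << (back == f) << std::endl;  // true
}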
Upvotes: 3