Reputation: 653
I came across an operation that confused me.
var a = 0.1;
var b = 0.2;
var c = 0.3;
console.log(a); // 0.1
console.log(b); // 0.2
console.log(c); // 0.3
But,
console.log(a+b+c); // 0.6000000000000001
While
console.log(a+(b+c)) // 0.6
I understand that JavaScript uses binary floating point and thus can't accurately represent 0.1, 0.2 and 0.3. But what do the brackets around (b+c) do? Is there any conversion or rounding up here?
Many thanks,
Upvotes: 2
Views: 4028
Reputation: 5468
A JavaScript number is represented in IEEE 754 double precision binary floating point (binary64) format; it is essentially scientific notation with 2 as the base. There are 64 bits in a number, and they are split into 3 parts (from high to low bits): 1 sign bit, 11 exponent bits, and 52 fraction bits.
So, a float number is calculated as: (-1) ^ sign * (2 ^ exponent) * significand
Note: as the exponent of a number in scientific notation can be either positive or negative, the actual exponent of a binary64 number is calculated by subtracting the exponent bias (the middle value, 1023) from the stored 11-bit exponent value.
The standard also defines the significand value to be in the range [1, 2).
As the first digit of the significand is therefore always 1, it is implied and not stored. So the significand actually has 53 bits of precision, even though only the 52-bit fraction (mantissa) part is stored.
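If you want to inspect these fields yourself, here is a small sketch of my own (just an illustration, not part of the explanation above) that splits a number into sign, exponent and significand with a DataView; it assumes a normal, finite number:
function decompose(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian byte order by default
  const hi = view.getUint32(0);                   // sign + exponent + top 20 fraction bits
  const lo = view.getUint32(4);                   // low 32 fraction bits
  const sign = hi >>> 31;                         // 1 bit
  const exponent = ((hi >>> 20) & 0x7ff) - 1023;  // 11 bits, bias removed
  const fraction = (hi & 0xfffff) * 2 ** 32 + lo; // the 52 stored bits
  const significand = 1 + fraction / 2 ** 52;     // implied leading 1 added back
  const value = (-1) ** sign * 2 ** exponent * significand;
  return { sign, exponent, significand, value };
}

console.log(decompose(0.1)); // { sign: 0, exponent: -4, significand: 1.6, value: 0.1 }
For 0.1 this shows the exponent -4 and the significand of roughly 1.6 that the worked example below relies on.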
Based on the standard, it's not hard to work out 0.1, 0.2 and 0.3 in binary64 format (you can calculate them either manually or with this tool: http://bartaz.github.io/ieee754-visualization/):
0.1
0 01111111011 1001100110011001100110011001100110011001100110011010
and in scientific notation, it is
1.1001100110011001100110011001100110011001100110011010 * 2^-4
Note: the significand is written in binary, and the numbers that follow are in the same format.
0.2
0 01111111100 1001100110011001100110011001100110011001100110011010
and in scientific notation, it is
1.1001100110011001100110011001100110011001100110011010 * 2^-3
0.3
0 01111111101 0011001100110011001100110011001100110011001100110011
and in scientific notation, it is
1.0011001100110011001100110011001100110011001100110011 * 2^-2
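You can also dump the raw 64 bits from JavaScript itself to double-check the three patterns above. This helper is my own addition (only DataView is assumed):
function toBits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian byte order by default
  let bits = '';
  for (let i = 0; i < 8; i++) {
    bits += view.getUint8(i).toString(2).padStart(8, '0');
  }
  // split into sign | exponent | fraction
  return bits.slice(0, 1) + ' ' + bits.slice(1, 12) + ' ' + bits.slice(12);
}

console.log(toBits(0.1)); // should print the 0.1 pattern shown above
console.log(toBits(0.2)); // should print the 0.2 pattern shown above
console.log(toBits(0.3)); // should print the 0.3 pattern shown above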
Now let's add them up. Since + is evaluated left to right, a + b + c means (0.1 + 0.2) + 0.3, and each addition of two binary floating point numbers works like this:
Step 1 - Align the exponents
Step 2 - Add up the significands
Step 3 - If the resulting significand does not satisfy the [1, 2) requirement, shift it back into that range and adjust the exponent; after the shift, round the significand
As explained above, 0.1 has exponent -4 and 0.2 has exponent -3, so we need to align the exponents first:
Shift 0.1 from
1.1001100110011001100110011001100110011001100110011010 * 2^-4
to
0.1100110011001100110011001100110011001100110011001101 * 2^-3
Then add the significands
0.1100110011001100110011001100110011001100110011001101
and
1.1001100110011001100110011001100110011001100110011010
which gives the sum
10.0110011001100110011001100110011001100110011001100111
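(As a quick aside, and not part of the original walkthrough: you can verify this significand addition with BigInt by writing both operands without the binary point.)
const shifted01 = 0b1100110011001100110011001100110011001100110011001101n; // 0.1100...1101 with the point dropped
const sig02 = 0b11001100110011001100110011001100110011001100110011010n;    // 1.1001...1010 with the point dropped
console.log((shifted01 + sig02).toString(2));
// 100110011001100110011001100110011001100110011001100111, i.e. the sum above with the point dropped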
But this sum is not in the range [1, 2), so we need to right shift it (and round) to:
1.0011001100110011001100110011001100110011001100110100 (* 2^-2)
Then add it to 0.3 (1.0011001100110011001100110011001100110011001100110011 * 2^-2), and we get:
10.0110011001100110011001100110011001100110011001100111 * 2^-2
Again, we need to shift it and round, and we finally get the value:
1.0011001100110011001100110011001100110011001100110100 * 2^-1
which is exactly the value displayed as 0.6000000000000001 in decimal.
With the same workflow, you can calculate 0.1 + (0.2 + 0.3).
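If you want to reproduce these steps programmatically, here is a rough sketch of my own. It is not how JavaScript engines are actually implemented, and it assumes both inputs are positive, finite, normal doubles and that the result stays in the normal range:
function fields(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  const bits = view.getBigUint64(0);
  const exponent = Number((bits >> 52n) & 0x7ffn) - 1023;      // remove the bias
  const significand = (bits & 0xfffffffffffffn) | (1n << 52n); // put back the implied leading 1
  return { exponent, significand }; // value = significand * 2 ** (exponent - 52)
}

function add(x, y) {
  let a = fields(x), b = fields(y);
  if (a.exponent < b.exponent) [a, b] = [b, a];

  // Step 1 + 2 - align the exponents and add the significands. Scaling the larger
  // operand up (instead of shifting the smaller one right) keeps the sum exact,
  // so the final rounding step sees every bit.
  const shift = BigInt(a.exponent - b.exponent);
  const sum = (a.significand << shift) + b.significand;
  let exponent = a.exponent;
  let extraBits = shift; // number of bits sitting below the 53-bit result

  // Step 3 - if the sum overflowed the [1, 2) range, shift once more.
  if (sum >> (52n + extraBits) > 1n) {
    extraBits += 1n;
    exponent += 1;
  }

  // Step 4 - round to nearest, ties to even, then rebuild the number.
  const keep = sum >> extraBits;
  let rounded = keep;
  if (extraBits > 0n) {
    const rest = sum & ((1n << extraBits) - 1n);
    const half = 1n << (extraBits - 1n);
    if (rest > half || (rest === half && (keep & 1n) === 1n)) rounded += 1n;
  }
  if (rounded >> 53n === 1n) { rounded >>= 1n; exponent += 1; } // rounding overflowed

  return Number(rounded) * 2 ** (exponent - 52);
}

console.log(add(0.1, 0.2));           // 0.30000000000000004, same as 0.1 + 0.2
console.log(add(add(0.1, 0.2), 0.3)); // 0.6000000000000001, same as 0.1 + 0.2 + 0.3
console.log(add(0.2, 0.3));           // 0.5, which is why 0.1 + (0.2 + 0.3) ends up closer to 0.6
Because the sum is kept exact until the single rounding at the end, this sketch reproduces the built-in + for such inputs.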
This web page http://bartaz.github.io/ieee754-visualization/ helps you quickly convert a decimal number to binary64 format; you can use it to verify the calculation steps.
If you are working with single precision binary floating point numbers, you can refer to this tool instead: http://www.h-schmidt.net/FloatConverter/IEEE754.html
Upvotes: 7
Reputation: 817128
The general problem is described in Is floating point math broken?.
In the remainder I will just look at the difference between the two computations.
From my comment:
Well, in the first case you are doing (0.1 + 0.2) + 0.3 = 0.3 + 0.3 and in the second case you do 0.1 + (0.2 + 0.3) = 0.1 + 0.5. I guess the rounding error in the first case is larger than in the second case.
Let's have a closer look at the actual values in this computation:
var a = 0.1;
var b = 0.2;
var c = 0.3;
console.log('          a:', a.toPrecision(21));
console.log('          b:', b.toPrecision(21));
console.log('          c:', c.toPrecision(21));
console.log('      a + b:', (a + b).toPrecision(21));
console.log('      b + c:', (b + c).toPrecision(21));
console.log('  a + b + c:', (a + b + c).toPrecision(21));
console.log('a + (b + c):', (a + (b + c)).toPrecision(21));
The output is
          a: 0.100000000000000005551
          b: 0.200000000000000011102
          c: 0.299999999999999988898
      a + b: 0.300000000000000044409
      b + c: 0.500000000000000000000
  a + b + c: 0.600000000000000088818
a + (b + c): 0.599999999999999977796
So, it's clear that both computations have rounding errors, but the errors are different because you are performing the additions in a different order. It just happens that a + b + c produces a larger error.
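In other words, floating point addition is not associative, which you can confirm directly:
console.log((a + b) + c === a + (b + c)); // false: the two groupings are different doubles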
The console seems to round the number to 16 significant digits:
> (a + b + c).toPrecision(16)
"0.6000000000000001"
> (a + (b + c)).toPrecision(16)
"0.6000000000000000"
That's why the second computation simply outputs 0.6. If the console rounded to 17 significant digits instead, things would look different:
> (a + b + c).toPrecision(17)
"0.60000000000000009"
> (a + (b + c)).toPrecision(17)
"0.59999999999999998"
Upvotes: 4
Reputation: 3504
That's not a problem specific to JavaScript; you would get similar surprises in other languages too.
Please read this: What Every Programmer Should Know About Floating-Point Arithmetic
Upvotes: 3