Chet

Reputation: 19829

Why does floating point error change based on the position of the decimal?

I ran into a floating point error today (verified in JavaScript and Python) that seems peculiar.

> 15 * 19.22
288.29999999999995

What's really weird about this scenario is that these numbers are well within the range of numbers representable by floating point. We're not dealing with really large or really small numbers.

In fact, if I simply move the decimal points around, I get the correct answer without rounding error.

> 15 * 19.22
288.29999999999995
> 1.5 * 192.2
288.29999999999995
> .15 * 1922.
288.3
> 150 * 1.922
288.3
> 1500 * .1922
288.3
> 15 * 1922 / 100
288.3
> 1.5 * 1.922 * 100
288.3
> 1.5 * .1922 * 1000
288.3
> .15 * .1922 * 10000
288.3

Clearly there must be some intermediate number that isn't representable with floating point, but how is that possible?

Is there a "safer" way of multiplying floating point numbers to prevent this issue? I figured that if the numbers were of the same order of magnitude, floating point multiplication would be at its most accurate, but clearly that is a wrong assumption.

Upvotes: 2

Views: 140

Answers (3)

Rudy Velthuis

Reputation: 28806

It is because of how binary floating point is represented. Take the values 0.8, 0.4, 0.2 and 0.1 as doubles. The values actually stored are:

0.8 --> 0.8000000000000000444089209850062616169452667236328125
0.4 --> 0.40000000000000002220446049250313080847263336181640625
0.2 --> 0.200000000000000011102230246251565404236316680908203125
0.1 --> 0.1000000000000000055511151231257827021181583404541015625
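
These values are easy to reproduce yourself; in Python, for example, converting a double to decimal.Decimal is exact, so a minimal sketch looks like this:

# Python sketch: Decimal(x) converts the stored double exactly, no rounding involved.
from decimal import Decimal

for x in (0.8, 0.4, 0.2, 0.1):
    print(x, '-->', Decimal(x))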

As you can easily see, the difference with the exact decimal value halves each time you halve the number. That is because all of these have the exact same significand, and only a different exponent. This gets clearer if you look at their hex representations:

0.8 --> 0x1.999999999999ap-1
0.4 --> 0x1.999999999999ap-2
etc...
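
Python can print that hex form directly with float.hex(), if you want to check it yourself; a small sketch:

# Python sketch: float.hex() shows the stored significand and binary exponent.
# Only the exponent differs between these four values.
for x in (0.8, 0.4, 0.2, 0.1):
    print(x, '-->', float.hex(x))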

So the difference between the true mathematical value and the value actually stored lies at or below that last bit. That bit is worth less the lower the exponent goes, and it goes up the other way: 1.6 is 0x1.999999999999ap+0, and so on. The higher you go, the larger that difference becomes, because of the exponent. That is why it is called a relative error.

And if you shift the decimal point, you are in fact changing the binary exponent as well. Not exactly proportionally, because we are dealing with different number bases, but pretty much equivalently. The higher the number, the higher the exponent, and thus the larger the difference between the mathematical value and the floating-point value becomes.
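
To see this effect on the numbers from the question, here is a small Python sketch; the error column is the exact difference between the stored double and the intended decimal value (Decimal(x) gives the stored double, Decimal(str(x)) the intended value):

# Python sketch: moving the decimal point in base 10 also changes the binary
# exponent, and with it the absolute size of the representation error.
from decimal import Decimal

for x in (0.1922, 1.922, 19.22, 192.2):
    error = Decimal(x) - Decimal(str(x))   # stored double minus intended value
    print(f"{x!s:>8}  {float.hex(x):>24}  error {error:+.3e}")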

Upvotes: 1

user1196549

Reputation:

Not an answer, but a long comment.

The order of magnitude is not the culprit, as the floating-point representation normalizes the numbers to a mantissa between 1 and 2:

  15       = 1.875     x 2^3
  19.22    = 1.20125   x 2^4
 150       = 1.171875  x 2^7
   0.1922  = 1.5376    x 2^-3

and the exponents are processed separately.
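
If you want to check that normalization yourself, Python's math.frexp exposes it; frexp returns a mantissa in [0.5, 1), so the sketch below rescales it to the [1, 2) form used above:

# Python sketch: math.frexp(x) splits x into a mantissa and a binary exponent.
# Doubling the mantissa and lowering the exponent by one gives the [1, 2) form.
import math

for x in (15, 19.22, 150, 0.1922):
    m, e = math.frexp(x)
    print(f"{x!s:>8} = {2 * m:<10} x 2^{e - 1}")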

Upvotes: 0

T.J. Crowder

Reputation: 1074138

Why does floating point error change based on the position of the decimal?

Because you're working in base 10. IEEE-754 double-precision binary floating point works in binary (base 2). In that representation, for instance, 1 can be represented exactly, but 0.1 cannot.¹
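
You can see that directly in Python, for instance, where Decimal shows the exact value a double stores (a minimal sketch):

# Python sketch: 1 converts to binary floating point exactly; 0.1 does not.
from decimal import Decimal
print(Decimal(1.0))   # 1
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625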

What's really weird about this scenario is that these numbers are well within the range of numbers representable by floating point. We're not dealing with really large or really small numbers.

As you can see from my statement above, even just going to tenths, you run into imprecise numbers without having to go to outrageous values (as you do to get to unrepresentable whole numbers like 9,007,199,254,740,993). Hence the famous 0.1 + 0.2 = 0.30000000000000004 thing:

console.log(0.1 + 0.2); // 0.30000000000000004

Is there a "safer" way of multiplying floating point numbers to prevent this issue?

Not using built-in floating point. You might work only in whole numbers (since they're reliable from -9,007,199,254,740,992 through 9,007,199,254,740,992) and then when outputting, insert the relevant decimal. You might find this question's answers useful: How to deal with floating point number precision in JavaScript?.
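
As a minimal sketch of that whole-number idea (shown in Python here, but the same approach works in JavaScript, where integers in that range are exact too): treat 19.22 as 1922 hundredths, do the arithmetic on exact integers, and only insert the decimal point when formatting the output.

# Python sketch of the whole-number approach, assuming two decimal places.
cents = round(19.22 * 100)   # 1922 -- round() guards against 1921.999... from the float product
total_cents = 15 * cents     # 28830, exact integer arithmetic
print(total_cents / 100)                                 # 288.3
print(f"{total_cents // 100}.{total_cents % 100:02d}")   # 288.30, formatted as a string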


¹ You may be wondering why, if 0.1 isn't represented exactly, console.log(0.1) outputs "0.1". It's because normally with floating point, when converting to string, only enough digits are output to differentiate the number from its nearest representable neighbor. In the case of 0.1, all that's needed is "0.1". Converting binary floating point to a decimal representation is quite complicated; see the various notes and citations in the spec. :-)
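
Python's default conversion behaves the same way (it prints the shortest round-tripping string), and asking for more digits reveals the stored value; a small sketch:

# Python sketch: the default output is the shortest string that round-trips,
# but requesting more digits exposes the value actually stored.
x = 0.1
print(x)                  # 0.1
print(format(x, '.25f'))  # 0.1000000000000000055511151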

Upvotes: 3
