David

Reputation: 2556

Computations in C# as accurately as with Windows Calculator

When I do the following double multiplication in C#, 100.0 * 1.005, I get 100.49999999999999 as a result. I believe this is because the exact number (or some intermediate result while evaluating the expression) cannot be represented. When I do the same computation in calc.exe I get 100.5, as expected.
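
A minimal snippet to reproduce it (the "R" round-trip format shows the stored value; newer .NET runtimes print this by default anyway):

    double result = 100.0 * 1.005;
    Console.WriteLine(result.ToString("R")); // prints 100.49999999999999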

Another example is adding up 0.001 nine times (that is the first time a deviation occurs), so basically 9d * 0.001d = 0.0090000000000000011. When I do the same computation in calc.exe I get 0.009, as expected.
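
Reproducing the incrementation (the exact trailing digits depend on the runtime's formatting):

    double sum = 0d;
    for (int i = 0; i < 9; i++)
        sum += 0.001d;                    // add 0.001 nine times
    Console.WriteLine(sum.ToString("R")); // prints 0.0090000000000000011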

Now one can argue that I should choose decimal instead. But with decimal I get problems with other computations, for example ((1M / 3M) * 3M) = 0.9999999999999999999999999999, while calc.exe says 1.
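
For example:

    decimal third = 1M / 3M;       // 0.3333333333333333333333333333
    Console.WriteLine(third * 3M); // prints 0.9999999999999999999999999999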

With calc.exe I can divide 1 by 3 repeatedly down to some really small number, then multiply by 3 just as many times, and I arrive at exactly 1 again. I therefore suspect that calc.exe computes internally with fractions, but obviously with really big ones, because it computes

(677605234775492641 / 116759166847407000) + (932737194383944703 / 2451942503795547000)

where the common denominator is -3422539506717149376 (an overflow occurred) when computed with long, so it must use at least ulong. Does anybody know how the computation in calc.exe is implemented? Is that implementation made public somewhere for reuse?
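
The overflow is easy to reproduce: multiplying the two denominators above in long wraps around, while System.Numerics.BigInteger gives the exact value:

    using System.Numerics;

    long d1 = 116759166847407000;
    long d2 = 2451942503795547000;
    Console.WriteLine(unchecked(d1 * d2));  // wraps around to a negative value
    Console.WriteLine((BigInteger)d1 * d2); // exact product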

Upvotes: 2

Views: 281

Answers (1)

Joey

Reputation: 354714

As described here, calc uses an arbitrary-precision engine for its calculations, while double is standard IEEE 754 arithmetic. decimal is floating-point arithmetic as well, just in base 10, so, as you point out, it has the same problems, merely in another base.

You can try finding such an arbitrary-precision arithmetic library for C# and using it, e.g. this one (no idea whether it's good; it was the first search result). The engine inside calc is not available as an API, so you cannot reuse it.
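
As a rough illustration of what such an engine does, here is a minimal sketch of exact rational arithmetic on top of System.Numerics.BigInteger (just an illustration of the idea, not calc's actual implementation); with it, the divide-by-3-then-multiply round trip from the question is lossless:

    using System;
    using System.Numerics;

    // Minimal exact rational number: numerator and denominator are
    // BigIntegers, so denominators can grow without overflowing.
    struct Rational
    {
        public readonly BigInteger Num, Den;

        public Rational(BigInteger num, BigInteger den)
        {
            BigInteger g = BigInteger.GreatestCommonDivisor(num, den);
            Num = num / g; // store in lowest terms
            Den = den / g;
        }

        public static Rational operator *(Rational a, Rational b)
            => new Rational(a.Num * b.Num, a.Den * b.Den);

        public static Rational operator /(Rational a, Rational b)
            => new Rational(a.Num * b.Den, a.Den * b.Num);

        public override string ToString() => $"{Num}/{Den}";
    }

    class Program
    {
        static void Main()
        {
            var x = new Rational(1, 1);
            var three = new Rational(3, 1);
            for (int i = 0; i < 20; i++) x /= three; // down to 1/3486784401
            for (int i = 0; i < 20; i++) x *= three; // and back up
            Console.WriteLine(x); // prints 1/1 -- exact
        }
    }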

Another point is that when you round the result to a certain number of places (fewer than 15), you'd also get the intuitively "correct" result in many cases. C# already does some rounding to hide the exact value of a double from you (where 0.3 is definitely not exactly 0.3, but probably something like 0.30000000000000004). By reducing the number of digits you display, you lessen the occurrence of such very small deviations from the correct value.
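
For instance, rounding to (say) twelve places recovers the expected value in the question's first example:

    double product = 100.0 * 1.005;             // 100.49999999999999
    Console.WriteLine(Math.Round(product, 12)); // prints 100.5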

Upvotes: 10
