Mikhail T.

Reputation: 1330

Double comparison precision loss in C#: accuracy loss when adding and subtracting doubles

Just started learning C#. I plan to use it for heavy math simulations, including numerical solving. The problem is that I get precision loss when adding and subtracting doubles, as well as when comparing. Code and what it returns (in comments) is below:

using System;

namespace ex3
{
    class Program
    {
        static void Main(string[] args)
        {

            double x = 1e-20, foo = 4.0;

            Console.WriteLine((x + foo)); // prints 4
            Console.WriteLine((x - foo)); // prints -4
            Console.WriteLine((x + foo)==foo); // prints True BUT THIS IS FALSE!!!
        }
    }
}

Would appreciate any help and clarifications!

What puzzles me is that (x + foo)==foo returns True.

Upvotes: 5

Views: 1981

Answers (4)

Henrique Moisés

Reputation: 102

What you are looking for is probably the Decimal structure (https://msdn.microsoft.com/en-us/library/system.decimal.aspx). Doubles can't correctly represent values of that kind with the precision you are looking for (see C# double to decimal precision loss). Instead, try the decimal type, like so:

decimal x = 1e-20M, foo = 4.0M;

Console.WriteLine(Decimal.Add(x, foo));        // prints 4,0000000000000000001
Console.WriteLine(Decimal.Add(x, -foo));       // prints -3,9999999999999999999
Console.WriteLine(Decimal.Add(x, foo) == foo); // prints False

Upvotes: 1

RaidenF

Reputation: 3531

This is not a problem with C#, but with your computer. It's not really complicated or hard to understand, but it is a long read. You should read this article if you want actual in-depth knowledge of how your computer works.

An excellent TL;DR site that, imho, is a better intro to the subject than the aforementioned article is this one:

http://floating-point-gui.de/


I'll provide a very short explanation of what's going on, but you should certainly read at least that site to avoid trouble in the future, since your field of application will require such knowledge in depth.

What happens is the following: you have 1e-20, which is smaller than 1.11e-16. That other number is (most likely) the machine epsilon for double precision on your computer. If you add something smaller than the machine epsilon to a number equal to or larger than 1, it gets rounded away, back to the large number. That's due to the IEEE 754 representation: the addition itself is carried out "correctly" (as if you had infinite precision), but the result is then stored in a format of limited/finite precision, which rounds 4.00...001 back to 4, because the relative error of that rounding is smaller than 1.11e-16 and is therefore considered acceptable.
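You can observe this rounding threshold directly. A minimal sketch (the halving loop is one common way to estimate the machine epsilon; it is an illustration, not part of the standard library):

```csharp
using System;

class EpsilonDemo
{
    static void Main()
    {
        // Halve a candidate until adding half of it to 1.0 no longer changes
        // the result; what remains is the machine epsilon for double.
        double eps = 1.0;
        while (1.0 + eps / 2 > 1.0)
            eps /= 2;

        Console.WriteLine(eps);                // ~2.22e-16 (1.11e-16 is eps/2, the rounding threshold)
        Console.WriteLine(1.0 + 1e-20 == 1.0); // True: 1e-20 is far below the threshold
    }
}
```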

Upvotes: 0

Wael Alshabani

Reputation: 1675

In addition to Enigmativity's answer:

To make this work you need more precision, and decimal provides it, with 28 to 29 significant digits in base 10:

decimal x = 1e-20m, foo = 4.0m;
Console.WriteLine((x + foo)); // prints 4.00000000000000000001
Console.WriteLine((x - foo)); // prints -3.99999999999999999999
Console.WriteLine((x + foo) == foo); // prints False

But beware: while it's true that decimal has greater precision, it has a much smaller range. See more about decimal here.
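To see that precision/range trade-off concretely, a minimal sketch comparing the two types' limits (the approximate magnitudes in the comments are from the .NET documentation):

```csharp
using System;

class DecimalVsDoubleRange
{
    static void Main()
    {
        // decimal: ~28-29 significant digits, but the range tops out
        // near 7.9e28, and nothing smaller in magnitude than 1e-28 fits.
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335

        // double: only ~15-16 significant digits, but a range up to ~1.8e308.
        Console.WriteLine(double.MaxValue);
    }
}
```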

Upvotes: 2

Enigmativity

Reputation: 117084

Take a look at the MSDN reference for double: https://msdn.microsoft.com/en-AU/library/678hzkk9.aspx

It states that a double has a precision of 15 to 16 digits.

But the difference, in terms of digits, between 1e-20 and 4.0 is 20 digits. The mere act of trying to add or subtract 1e-20 to or from 4.0 simply means that the 1e-20 is lost because it cannot fit within the 15 to 16 digits of precision.

So, as far as double is concerned, 4.0 + 1e-20 == 4.0 and 4.0 - 1e-20 == 4.0.
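Because of this, numerical code usually compares doubles against a tolerance instead of using `==`. A minimal sketch (the `NearlyEqual` helper and its default tolerance are illustrative choices, not a framework API):

```csharp
using System;

class ToleranceCompare
{
    // Compare with a tolerance scaled by the magnitudes involved,
    // rather than demanding bit-for-bit equality.
    static bool NearlyEqual(double a, double b, double relTol = 1e-12)
        => Math.Abs(a - b) <= relTol * Math.Max(Math.Abs(a), Math.Abs(b));

    static void Main()
    {
        Console.WriteLine(4.0 + 1e-20 == 4.0);          // True: the 1e-20 was absorbed
        Console.WriteLine(0.1 + 0.2 == 0.3);            // False under exact comparison
        Console.WriteLine(NearlyEqual(0.1 + 0.2, 0.3)); // True within tolerance
    }
}
```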

Upvotes: 4
