Ahmed Masud

Reputation: 612

Subtracting double gives wrong result

I am trying to get the decimal part of a double. This is my code:

double decimalvalue = 23423.1234 - 23423.0;
// decimalvalue is 0.12340000000040163

After the subtraction I expect decimalvalue to be 0.1234, but I get 0.12340000000040163. Please help me understand this behavior and whether there is any workaround for it.

Upvotes: 3

Views: 5844

Answers (2)

Eric J.

Reputation: 150108

I suggest you have a look at

What Every Computer Scientist Should Know About Floating-Point Arithmetic

Wikipedia: IEEE 754

A floating-point type can represent only a finite set of values, but there are infinitely many real numbers in the range it covers.

Some numbers therefore cannot be represented exactly in any float/double-style data type.

The typical way to handle your specific problem is to avoid a direct equality comparison and instead do an epsilon test: check whether the expected and computed values are within some small tolerance of each other (small relative to the magnitudes of the values being compared), called epsilon.

Indirectly related is the concept of Machine Epsilon, worth having a look at for a more complete understanding.
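A minimal sketch of such an epsilon test in Java (`nearlyEqual` and the tolerance `1e-9` are illustrative choices, not a standard API; a relative tolerance is used so the test scales with the operands):

```java
public class EpsilonDemo {
    // Relative-tolerance comparison: true when a and b differ by at most
    // epsilon times the larger magnitude of the two values.
    static boolean nearlyEqual(double a, double b, double epsilon) {
        return Math.abs(a - b) <= epsilon * Math.max(Math.abs(a), Math.abs(b));
    }

    public static void main(String[] args) {
        double decimalvalue = 23423.1234 - 23423.0;
        // Exact equality fails because of the representation error:
        System.out.println(decimalvalue == 0.1234);                   // false
        // The epsilon test accepts the tiny difference:
        System.out.println(nearlyEqual(decimalvalue, 0.1234, 1e-9));  // true
    }
}
```

The actual error here (about 4e-13) is far smaller than the chosen tolerance, so the comparison succeeds; what counts as an acceptable epsilon depends on your application.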

Upvotes: 8

CrazyCasta

Reputation: 28302

This is a rounding error. In base ten you cannot perfectly represent 1/3 in a fixed number of digits (say 15). In base 2 there are many more values you cannot represent, and 0.1234 happens to be one of them. The precision depends on the magnitude, but it is about 15 decimal digits for a double. I would suggest taking a look at http://en.wikipedia.org/wiki/IEEE_floating_point for more details on floating-point numbers.

If you are trying to build a base-10 system (a human-operated calculator, for instance) and you need exact results, you should use BCD or another decimal representation.
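One readily available decimal representation in Java (an alternative to hand-rolled BCD, not what this answer specifically names) is `java.math.BigDecimal`, which stores an exact base-10 value. Constructing from strings keeps the decimal digits exact:

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // String constructors preserve the written decimal digits exactly;
        // new BigDecimal(23423.1234) would instead capture the binary
        // approximation of that double.
        BigDecimal a = new BigDecimal("23423.1234");
        BigDecimal b = new BigDecimal("23423.0");
        System.out.println(a.subtract(b)); // prints 0.1234
    }
}
```

The subtraction is performed in base 10, so the fractional part comes out as exactly 0.1234 with no binary rounding error.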

Upvotes: 3
