Reputation: 33
I understand that binary floating point cannot represent most decimal fractions exactly, so operations on doubles are not precise. However, in Java I have no idea why "(double) 65 / 100" prints as 0.65, which is exactly correct in decimal, rather than something like 0.6500000000000004 as the subtraction below does.
double a = 5;
double b = 4.35;
int c = 65;
int d = 100;
System.out.println(a - b); // 0.6500000000000004
System.out.println((double) c / d); // 0.65
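For reference, a quick check (a sketch, assuming standard IEEE 754 doubles; the class name is arbitrary) shows that the two results really are different double values, even though both are "close to" 0.65:

public class CompareToLiteral {
    public static void main(String[] args) {
        double diff = 5.0 - 4.35;
        double quot = 65.0 / 100;

        // 65.0 / 100 rounds to the double nearest 0.65, which is the same
        // double the literal 0.65 denotes, so this comparison is true.
        System.out.println(quot == 0.65);   // true
        // 5.0 - 4.35 lands on a different, slightly larger double.
        System.out.println(diff == 0.65);   // false
    }
}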
Upvotes: 3
Views: 300
Reputation: 462
Java has its own way of handling floating-point binary-to-decimal conversions.
A simple program in C (compiled with gcc) gives this result:
printf("1: %.20f\n", 5.0 - 4.35); // 0.65000000000000035527
printf("2: %.20f\n", 65./100); // 0.65000000000000002220
while Java gives this result (17 significant digits would already show the difference, but the extra digits make it clearer):
System.out.printf("%.20f\n", 5.0 - 4.35); // 0.65000000000000040000
System.out.printf("%.20f\n", 65./100); // 0.65000000000000000000
But when using the %a format specifier, both languages print the same underlying (correct) hexadecimal value: 0x1.4ccccccccccd00000000p-1.
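Java's Formatter also supports the %a conversion, so you can dump the stored bits directly from Java and compare them with the C output (a minimal sketch, assuming an IEEE 754 JVM; the class name is arbitrary):

public class ShowBits {
    public static void main(String[] args) {
        double diff = 5.0 - 4.35;   // not exactly 0.65
        double quot = 65.0 / 100;   // the double nearest to 0.65

        // %a prints the stored bits as hexadecimal floating point, so no
        // decimal rounding is involved; C's printf("%a", ...) should agree.
        System.out.printf("diff = %a%n", diff);
        System.out.printf("quot = %a%n", quot);

        // Double.toHexString gives the same information without Formatter.
        System.out.println(Double.toHexString(diff));
    }
}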
So Java is performing some extra rounding at some point. The apparent issue is that Java follows a different set of rules when converting binary to decimal. From the Java specification:
The number of digits in the result for the fractional part of m or a is equal to the precision. If the precision is not specified then the default value is 6. If the precision is less than the number of digits which would appear after the decimal point in the string returned by Float.toString(float) or Double.toString(double) respectively, then the value will be rounded using the round half up algorithm. Otherwise, zeros may be appended to reach the precision. For a canonical representation of the value, use Float.toString(float) or Double.toString(double) as appropriate. (emphasis mine)
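In other words, for %f Java effectively starts from the digits Double.toString would produce and then zero-pads (or rounds) to the requested precision, instead of continuing the exact decimal expansion the way C does. A small sketch of the difference (new BigDecimal(double) is used here only to display the exact stored value; the expected outputs in the comments assume IEEE 754 doubles):

import java.math.BigDecimal;

public class FormatterPadding {
    public static void main(String[] args) {
        double quot = 65.0 / 100;

        System.out.println(Double.toString(quot));  // "0.65" -- the canonical, shortest form
        System.out.printf("%.20f%n", quot);         // "0.65000000000000000000" -- zero-padded per the rule above
        System.out.println(new BigDecimal(quot));   // 0.6500000000000000222... -- the exact stored value
    }
}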
And in the toString
specification:
How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double. That is, suppose that x is the exact mathematical value represented by the decimal representation produced by this method for a finite nonzero argument d. Then d must be the double value nearest to x; or if two double values are equally close to x, then d must be one of them and the least significant bit of the significand of d must be 0. (emphasis mine)
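That "only as many digits as are needed to uniquely distinguish the value from its neighbours" rule is easy to see by stepping to adjacent doubles with Math.nextUp (a sketch; the strings in the comments are what I would expect on an IEEE 754 JVM):

public class ShortestDigits {
    public static void main(String[] args) {
        double quot = 65.0 / 100;   // the double nearest to 0.65
        double diff = 5.0 - 4.35;   // a nearby but different double, a few ulps above

        System.out.println(Double.toString(quot));               // "0.65" -- enough to single out this double
        System.out.println(Double.toString(Math.nextUp(quot)));  // e.g. "0.6500000000000001" -- the next double needs more digits
        System.out.println(Double.toString(diff));               // "0.6500000000000004"
    }
}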
So Java does perform a binary-to-decimal conversion that differs from C's, but the decimal string it prints is still closer to the stored binary value than to any other double, so the spec guarantees that the binary value can be recovered by converting the decimal string back.
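That round-trip guarantee can be checked directly (a minimal sketch):

public class RoundTrip {
    public static void main(String[] args) {
        double[] values = { 5.0 - 4.35, 65.0 / 100, 0.1 + 0.2 };
        for (double d : values) {
            String s = Double.toString(d);
            // Parsing the decimal string back must yield the identical double.
            System.out.println(s + " round-trips: " + (Double.parseDouble(s) == d));
        }
    }
}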
Professor William Kahan warned about some Java floating-point issues in this article:
How Java’s Floating-Point Hurts Everyone Everywhere
But this conversion behaviour seems to be IEEE-compliant.
EDIT: I have incorporated information provided by @MarkDickinson in the comments: this Java behaviour, albeit different from C's, is documented and IEEE-compliant. This has already been explained here, here, and here.
Upvotes: 1