StarPinkER

Reputation: 14271

Why does the value of a double seem to change after assignment?

The result of the following program is a little bit strange to me on my machine.

#include <iostream>

using namespace std;

int main(){
    double a = 20;
    double b = 0.020;
    double c = 1000.0;

    double d = b * c;

    if(a < b * c)
        cout << "a < b * c" << endl;

    if(a < d)
        cout << "a < d" << endl;

    return 0;
}

Output:

$ ./test
a < b * c

I know double is not exact because of its limited precision, but I did not expect the value to change after assignment and give an inconsistent comparison result.

If a < b * c gets printed, I expect a < d to be printed as well. But when I run this code on my i686 server, and even under Cygwin, I see a < b * c but not a < d.

This issue has been confirmed to be platform dependent. Is it caused by different instructions being generated for the multiplication and for the assignment to the double?

UPDATE

The generated assembly:

main:
.LFB1482:
    pushl   %ebp
.LCFI0:
    movl    %esp, %ebp
.LCFI1:
    subl    $56, %esp
.LCFI2:
    andl    $-16, %esp
    movl    $0, %eax
    subl    %eax, %esp
    movl    $0, -8(%ebp)            # a = 20.0   (low 32 bits)
    movl    $1077149696, -4(%ebp)   # a = 20.0   (high 32 bits)
    movl    $1202590843, -16(%ebp)  # b = 0.020  (low 32 bits)
    movl    $1066695393, -12(%ebp)  # b = 0.020  (high 32 bits)
    movl    $0, -24(%ebp)           # c = 1000.0 (low 32 bits)
    movl    $1083129856, -20(%ebp)  # c = 1000.0 (high 32 bits)
    fldl    -16(%ebp)               # push b onto the x87 stack
    fmull   -24(%ebp)               # st(0) = b * c at 80-bit precision
    fstpl   -32(%ebp)               # d = b * c, rounded to a 64-bit double
    fldl    -16(%ebp)               # push b again
    fmull   -24(%ebp)               # recompute b * c, kept in st(0) at 80 bits
    fldl    -8(%ebp)                # push a: st(0) = a, st(1) = b * c
    fxch    %st(1)                  # swap: st(0) = b * c, st(1) = a
    fucompp                         # compare st(0) with st(1), pop both
    fnstsw  %ax                     # copy the FPU status word to ax
    sahf                            # ...and into the CPU flags
    ja  .L3                         # taken when the 80-bit b * c > a
    jmp .L2

    # .L3 prints "a < b * c" to stdout
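The extra precision can also be made visible directly. Here is a minimal sketch, assuming GCC on x86 where long double maps to the 80-bit extended format (it is not part of the original program):

#include <iomanip>
#include <iostream>

int main() {
    double b = 0.020;
    double c = 1000.0;

    double      d  = b * c;                          // rounded to a 64-bit double
    long double ld = static_cast<long double>(b) * c; // exact product of the two doubles

    std::cout << std::setprecision(25)
              << "d  = " << d  << "\n"
              << "ld = " << ld << "\n";
    // Typically d prints as exactly 20, while ld shows a value slightly
    // greater than 20 - matching the two different comparison results.
    return 0;
}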

Upvotes: 12

Views: 1741

Answers (3)

Billy Donahue

Reputation: 584

Hypothesis: you may be seeing the effects of the 80-bit Intel FPU.

With the definition double d = b * c, the quantity b * c is computed with 80-bit precision and rounded to 64 bits when it is stored into d. (a < d) then compares the 64-bit a to the 64-bit d.

OTOH, with the expression (a < b * c), you have the 80-bit arithmetic result of b * c being compared directly against a before it ever leaves the FPU. So the b * c result never has its precision clipped by being stored into a 64-bit variable.

You'd have to look at the generated instructions to be sure, and I expect this will vary with compiler versions and optimizer flags.
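If that hypothesis is right, forcing the product through memory should make the two comparisons agree. A minimal sketch, assuming GCC on an x87 target (the volatile temporary is just one way to force the store; the flags in the comment are alternatives):

#include <iostream>

int main() {
    double a = 20;
    double b = 0.020;
    double c = 1000.0;

    // Spill the product to memory before comparing; the volatile store/load
    // forces rounding to a 64-bit double even when the FPU works at 80 bits.
    volatile double t = b * c;

    if (a < t)
        std::cout << "a < t" << std::endl;  // not printed if t rounds to 20.0

    // Compiling the original code with g++ -ffloat-store, or targeting SSE
    // with -mfpmath=sse -msse2, should have a similar effect.
    return 0;
}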

Upvotes: 5

GEMISIS

Reputation: 444

A quick test of the code with MinGW on my Windows machine produces the exact same results. What's really strange, though, is that if I change the doubles to floats, everything runs fine as it should (no output at all). However, if I change them to long doubles, both "a < b * c" and "a < d" appear.

My guess is that since doubles are supposed to allow for more precision, something different happens when the two values are multiplied and compared immediately, versus storing the result for later. That would also explain why the issue eventually shows up with long doubles too, since they require even more memory.
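For anyone who wants to repeat these experiments, here is a minimal sketch of the same test with the element type pulled out into an alias (real_t is just an illustrative name, not from the original code):

#include <iostream>

// Switch real_t between float, double, and long double to compare behaviour.
typedef double real_t;

int main() {
    real_t a = 20;
    real_t b = 0.020;
    real_t c = 1000.0;

    real_t d = b * c;

    if (a < b * c)
        std::cout << "a < b * c" << std::endl;
    if (a < d)
        std::cout << "a < d" << std::endl;
    return 0;
}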

Upvotes: 1

abominable snowman

Reputation: 101

I'm not sure what type of hardware an AS3 machine is, but for example, you can see this behavior on machines whose internal floating-point unit uses larger-than-64-bit floats to store intermediate results. This is the case on the x86 architecture with the x87 floating-point unit (but not with SSE).

The problem is that the processor loads b and c into floating-point registers, then does the multiplication and keeps the temporary result in a register. If this register is wider than 64 bits, the result will differ from d (or a), which were computed and stored back to memory, forcing them to 64 bits.

This is one scenario of many; you would need to look at your assembly code to determine exactly what is going on. You also need to understand how your hardware handles floating-point computations internally.
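As a quick way to check which scenario applies on a given build, you can print the evaluation method the implementation reports. This is a minimal sketch, assuming the compiler provides the C99/C++11 FLT_EVAL_METHOD macro from <cfloat>:

#include <cfloat>
#include <iostream>

int main() {
    // FLT_EVAL_METHOD reports how floating-point expressions are evaluated:
    //   0 - in the range and precision of the operand type (typical for SSE)
    //   2 - in long double precision (typical for x87)
    std::cout << "FLT_EVAL_METHOD = " << FLT_EVAL_METHOD << std::endl;
    return 0;
}

A value of 2 here would be consistent with the 80-bit intermediate results described above, while 0 is what you would typically see on an SSE build.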

Upvotes: 3
