Reputation: 12151
I sometimes do the following in my code when I am being lazy:
double d = 0;
...
if (condition1) d = 1.0;
...
if (d == 0) { // not sure if this is valid
}
Can I always count on the fact that if I set a double to 0, I can later check for it in the code with d == 0 reliably?
I understand that if I do computations on d in the above code, I can never expect d to be exactly 0, but if I only set it, I would think I can. All my testing so far on various systems seems to indicate that this is legitimate.
Upvotes: 1
Views: 231
Reputation: 2084
I would avoid it. While the comparison would almost certainly work in the example you gave, it can fall apart unexpectedly when you use values that cannot be represented exactly in a floating point variable. When a value is not exactly representable, precision may be lost as it is copied from the coprocessor's wider registers to memory, so it will no longer compare equal to a value that stayed in the coprocessor the whole time.
My preference would be to compare against a tolerance, e.g. if (fabs(d) < 0.0001), or to keep the information in a separate boolean.
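For illustration, here is a minimal sketch of both options; the 0.0001 tolerance and the flag name d_was_set are just placeholders to show the idea, pick whatever fits your data:

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

int main(void) {
    int condition1 = 0;      /* stand-in for the real condition */
    double d = 0;
    bool d_was_set = false;  /* track the assignment explicitly */

    if (condition1) {
        d = 1.0;
        d_was_set = true;
    }

    /* tolerance-based test: treats anything near zero as zero */
    if (fabs(d) < 0.0001)
        printf("d is (close to) zero\n");

    /* flag-based test: no floating-point comparison at all */
    if (!d_was_set)
        printf("d was never assigned\n");

    return 0;
}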
Edit: From Wikipedia: "For example, the decimal number 0.1 is not representable in binary floating-point of any finite precision; the exact binary representation would have a '1100' sequence continuing endlessly."
So if you had if (d == 0.1), that has the potential to create a big mess. Are you sure you will never have to change the code in this way?
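As a quick illustration (assuming typical IEEE 754 doubles), adding 0.1 ten times does not produce a value that compares equal to 1.0:

#include <stdio.h>

int main(void) {
    double d = 0;
    for (int i = 0; i < 10; ++i)
        d += 0.1;  /* each 0.1 is already slightly off, and the errors accumulate */

    if (d == 1.0)
        printf("equal\n");
    else
        printf("not equal: d = %.17g\n", d);  /* typically 0.99999999999999989 */

    return 0;
}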
Upvotes: 1
Reputation: 477512
Yes, for most cases (i.e. excluding NaN): after the assignment a = b; the condition a == b is true. So as long as you are only assigning and comparing against the value you assigned from, this should be reliable, even for floating-point types.
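To make that concrete, a small sketch (assuming IEEE 754 doubles and C99's nan() from math.h; the variable names are just for illustration):

#include <math.h>
#include <stdio.h>

int main(void) {
    double d = 0;
    if (d == 0)
        printf("an assigned zero compares equal to 0\n");   /* always prints */

    double a = 0.1;     /* not exactly representable, but that is fine here */
    double b = a;       /* plain assignment, no arithmetic */
    if (a == b)
        printf("a == b holds after b = a\n");               /* always prints */

    double n = nan("");  /* the one exception */
    double m = n;
    if (m != n)
        printf("NaN never compares equal, even to a copy of itself\n");

    return 0;
}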
Upvotes: 4