Reputation: 47
I have seen many questions and answers on Stack Overflow about the representation of floating point numbers, which address how different numbers are rounded differently.
I'm testing an engine written in Fortran which solves a nonlinear system iteratively. I have observed inconsistency in the final results (sometimes divergence) starting from identical initial conditions.
I'm told that rounding of the same number can itself be random, depending on system resources.
As a trivial example, I have evaluated 0.1 + 0.2 in C# in a loop many times, and I always get 0.30000000000000004.
So, is it possible to get different rounded numbers for the same floating point number depending on the system resources status or any other factor?
Upvotes: 2
Views: 457
Reputation: 222763
Rounding in floating-point operations is deterministic in IEEE 754 and in common floating-point implementations that do not fully conform to IEEE 754.
The default rounding rule for results within finite bounds of the floating-point format being used is that the floating-point result of an operation is the number you would get by performing the operation with exact real-number arithmetic (“infinitely precise”) and then selecting the number in S that is closest to that exact result, where S is the set of all numbers representable in the destination format. If there is a tie, the number with the even low digit in the significand is chosen. (The significand is the fraction portion of a floating-point representation; in the floating-point form ±f•b^e, f is the significand.)
(To handle results outside the finite bounds, S is treated as if it included two additional numbers, one just above the largest representable finite value, in the position where it would be if the exponent range continued, and the negation of that. If the rounding selects one of those numbers, the result of the floating-point operation is +∞ or −∞, correspondingly. Also, for esoteric cases in which the rule about the even low digit fails to distinguish which result to select, the tied number with larger magnitude is chosen. This applies only for one-digit formats, as when converting 9.5 to a requested output format with only one digit, which must produce +9•10^0 or +1•10^1.)
There are other rules besides this default, such as choosing the greatest element in S that is not greater than the exact result (round downward), choosing the least that is not less (round upward), choosing the result with the greatest magnitude not exceeding the magnitude of the exact result (round toward zero), and choosing the element with an odd low bit whenever the exact result is not representable (round to odd).
All of these rounding functions are deterministic; they require one specific result for any operation; they do not produce different results when the same operation with the same operands is performed at different times. They are also weakly monotonic. (x < y implies rounding(x) ≤ rounding(y) and similarly for >.)
There are various sources of non-determinism in floating-point software. One is multi-threaded software that assigns subtasks to different threads and that coalesces the results of those threads in ways that depend on system performance.
Upvotes: 4
Reputation: 126
No, not really. Floating point numbers store values in the form significand × 2^exponent: some of the bits represent the significand and some represent the exponent, with the base fixed at 2. For instance, rather than storing 4294967296 digit by digit, a float stores it as 1 × 2^32 (a simplification). You lose precision with decimal fractions like 0.1 because they have no finite binary representation. Performing many operations on a float can accumulate rounding error, as shown above, but performing the same operations on the same number will always produce the same rounding error.
Upvotes: 0