Reputation: 66583
If I run a complex calculation involving System.Double on .NET under Windows (x86 and x64) and then on Mono (Linux, Unix, whatever), am I absolutely guaranteed to get exactly the same result in all cases, or does the specification allow for some leeway in the calculation?
Upvotes: 13
Views: 631
Reputation: 108810
No, it's not the same. It might compile to x87 or SSE instructions, which behave differently (for example, regarding denormal support). I found no way to force .NET to use reproducible floating-point math.
There are some alternatives, but all of them are slow and some are a lot of work:

- Fixed-point arithmetic (a sketch follows after this answer). Log and Sqrt will be slow. If you want, I can dig out my unfinished code for this.
- Decimal is implemented in software and thus probably reproducible too, but it isn't fast either.

Upvotes: 5
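To make the fixed-point alternative above concrete, here is a minimal illustrative sketch. The Fix16 type, its Q16.16 layout, and the Demo class are assumptions made up for illustration, not the answerer's unfinished code; overflow handling and functions such as Log and Sqrt are omitted.

    using System;

    // Illustrative Q16.16 fixed-point type. All arithmetic is plain 64-bit
    // integer math, so results are bit-identical on every runtime and CPU.
    // Overflow checks and functions like Log and Sqrt are omitted.
    struct Fix16
    {
        const int FracBits = 16;      // 16 fractional bits
        public readonly long Raw;     // value scaled by 2^16

        public Fix16(long raw) { Raw = raw; }

        public static Fix16 FromDouble(double v)
            => new Fix16((long)Math.Round(v * (1 << FracBits)));

        public double ToDouble() => (double)Raw / (1 << FracBits);

        public static Fix16 operator +(Fix16 a, Fix16 b) => new Fix16(a.Raw + b.Raw);
        public static Fix16 operator -(Fix16 a, Fix16 b) => new Fix16(a.Raw - b.Raw);
        public static Fix16 operator *(Fix16 a, Fix16 b) => new Fix16((a.Raw * b.Raw) >> FracBits);
        public static Fix16 operator /(Fix16 a, Fix16 b) => new Fix16((a.Raw << FracBits) / b.Raw);
    }

    class Demo
    {
        static void Main()
        {
            Fix16 x = Fix16.FromDouble(1.5);   // exact: 1.5 * 2^16 = 98304
            Fix16 y = Fix16.FromDouble(2.25);  // exact: 2.25 * 2^16 = 147456
            Console.WriteLine((x * y).ToDouble()); // always 3.375, on any platform
        }
    }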
Reputation: 239724
I do not believe so. Such phrases as:
The size of the internal floating-point representation is implementation-dependent, can vary, and shall have precision at least as great as that of the variable or expression being represented.
and:
This design allows the CLI to choose a platform-specific high-performance representation for floating-point numbers until they are placed in storage locations. For example, it might be able to leave floating-point variables in hardware registers that provide more precision than a user has requested. At the same time, CIL generators can force operations to respect language-specific rules for representations through the use of conversion instructions.
from section 12.1.3 of Partition I of the CLI specification (ECMA-335) would tend to indicate that rounding differences might occur whenever operations take place within the wider internal representation rather than being rounded back to the declared precision.
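A small program can show what that spec language permits in practice. This is a sketch, not a guaranteed difference on any particular runtime: the outcome depends on whether the JIT keeps the intermediate product at x87 extended precision (where 2e308 is representable) or rounds it to a 64-bit double (where it overflows).

    using System;

    class InternalRepresentationDemo
    {
        static void Main()
        {
            double big = 1e308;
            // big * 2.0 overflows a 64-bit double, but fits comfortably in
            // an x87 80-bit register. If the JIT keeps the intermediate in
            // the wider internal representation, the division brings the
            // value back in range and the result is 2; otherwise it is
            // Infinity.
            double result = big * 2.0 / 1e308;
            Console.WriteLine(result);
        }
    }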
Upvotes: 0
Reputation: 108957
From MSDN:
In addition, the loss of precision that results from arithmetic, assignment, and parsing operations with Double values may differ by platform. For example, the result of assigning a literal Double value may differ in the 32-bit and 64-bit versions of the .NET Framework.
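One way to test this on your own targets is to compare raw bit patterns instead of formatted output, since default string formatting can hide low-order differences. A minimal sketch (the expression is just an arbitrary chained calculation, not anything from the documentation):

    using System;

    class BitPatternCheck
    {
        static void Main()
        {
            // Arbitrary chained calculation whose low-order bits may vary
            // with the platform's intermediate precision.
            double value = Math.Sqrt(2.0) * Math.PI / 3.0;

            // Print the exact 64-bit pattern; compare this output across
            // 32-bit .NET, 64-bit .NET, and Mono runs of the same program.
            long bits = BitConverter.DoubleToInt64Bits(value);
            Console.WriteLine("{0:R}  0x{1:X16}", value, bits);
        }
    }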
Hope that helps.
Upvotes: 14