Reputation: 2441
Hi,
I developed some mixed C/C++ code with intensive numerical calculations. When compiled on Linux and Mac OS X, I get very similar results after the simulation ends. On Windows the program compiles as well, but I get very different results and sometimes the program does not seem to work at all.
I used the GNU compilers on all systems. A friend recommended adding -frounding-math, and now the Windows version seems more stable, but the Linux and OS X results do not change at all.
Could you recommend other options to get better agreement between the Windows and Linux/OS X versions?
Thanks
P.S. I also tried -O0 (no optimizations) and specified -m32.
Upvotes: 7
Views: 8565
Reputation: 3482
The IEEE and C/C++ standards leave some aspects of floating-point math unspecified. Yes, the precise result of adding two floats is determined, but any more complicated calculation is not. For instance, if you add three floats then the compiler can do the evaluation at float precision, double precision, or higher. Similarly, if you add three doubles then the compiler may do the evaluation at double precision or higher.
VC++ defaults to setting the x87 FPU's precision to double. I believe that gcc leaves it at 80-bit precision. Neither is clearly better, but they can easily give different results, especially if there is any instability in your calculations. In particular, 'tiny + large - large' may give very different results if you have extra bits of precision (or if the order of evaluation changes). The implications of varying intermediate precision are discussed here:
http://randomascii.wordpress.com/2012/03/21/intermediate-floating-point-precision/
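As a rough sketch (my addition, not from the answer above) of how intermediate precision changes a 'tiny + large - large' computation: the float result below is 0 under strict single-precision (SSE) evaluation, but can be about 1e-4 when intermediates are carried in x87 extended precision.

#include <cstdio>

int main()
{
    volatile float tiny  = 1.0e-4f;  // volatile keeps the arithmetic at run time
    volatile float large = 1.0e+8f;  // 'tiny' is far below one ulp of 'large' in float

    // Strict single-precision evaluation absorbs 'tiny' into 'large', giving 0;
    // x87 extended-precision intermediates can preserve it instead.
    float f = tiny + large - large;

    // Forcing double intermediates preserves 'tiny' on any target.
    double d = (double)tiny + (double)large - (double)large;

    std::printf("float  evaluation: %g\n", f);
    std::printf("double evaluation: %g\n", d);
    return 0;
}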
The challenges of deterministic floating-point are discussed here:
http://randomascii.wordpress.com/2013/07/16/floating-point-determinism/
Floating-point math is tricky. You need to find out when your calculations diverge and examine the generated code to understand why. Only then can you decide what actions to take.
Upvotes: 0
Reputation: 96141
There are four different rounding modes for floating-point numbers: round toward zero, round up, round down, and round to nearest. The default may differ depending upon the compiler and operating system. To change the rounding mode programmatically, see fesetround. It is specified by the C99 standard, but it may be available to you in C++ as well.
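Here is a minimal sketch of switching the rounding mode at run time with fesetround (this example is mine, not part of the original answer; <cfenv> is the C++ spelling of the C99 <fenv.h> header):

#include <cfenv>    // fesetround, FE_TOWARDZERO, FE_TONEAREST
#include <cstdio>

int main()
{
    volatile double x = 1.0, y = 10.0;  // volatile keeps the division at run time

    // fesetround returns 0 on success, nonzero if the mode is unsupported.
    if (std::fesetround(FE_TOWARDZERO) == 0)
        std::printf("1/10 toward zero: %.17g\n", x / y);

    std::fesetround(FE_TONEAREST);      // restore the usual default
    std::printf("1/10 to nearest:  %.17g\n", x / y);
    return 0;
}

The two printed values typically differ in the last digits. Note that -frounding-math, which you already pass, is the gcc option that tells the optimizer not to assume the default round-to-nearest mode, so it pairs naturally with this kind of code.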
You can also try the -ffloat-store gcc option. This tries to prevent gcc from keeping 80-bit floating-point values in registers by storing variables back to memory, rounded to their declared type.
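As a hedged illustration (my own, not from the answer) of the symptom this option targets: on x87 targets an expression can be carried at 80-bit precision while a stored variable holds the value rounded to its declared type, so the two can compare unequal. The gcc manual notes that -ffloat-store helps once the relevant intermediates are stored into variables.

#include <cstdio>

int main()
{
    volatile float x = 1.0f, y = 3.0f;  // volatile blocks constant folding

    float q = x / y;        // stored into a variable: rounded to 32-bit float

    // On x87 the right-hand side may be recomputed at extended precision,
    // so this comparison can be false even though both sides read "x / y".
    if (q == x / y)
        std::printf("consistent\n");
    else
        std::printf("excess-precision intermediate detected\n");
    return 0;
}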
Also, if your results change depending upon the rounding method, and the differences are significant, it means that your calculations may not be stable. Please consider doing interval analysis, or using some other method to find the problem. For more information, see How Futile are Mindless Assessments of Roundoff in Floating-Point Computation? (pdf) and The pitfalls of verifying floating-point computations (ACM link, but you can get PDF from a lot of places if that doesn't work for you).
Upvotes: 9
Reputation: 4004
In addition to the runtime rounding settings that people mentioned, you can control the Visual Studio compiler settings in Properties > C/C++ > Code Generation > Floating Point Model. I've seen cases where setting this to "Fast" caused bad numerical behavior (e.g. iterative methods failing to converge).
The settings are explained here: http://msdn.microsoft.com/en-us/library/e7s85ffb%28VS.80%29.aspx
Upvotes: 1
Reputation: 8822
I can't speak to the implementation on Windows, but Intel chips contain 80-bit floating-point registers and can give greater precision than the IEEE-754 double format. You can try calling this routine in the main() of your application (on Intel chip platforms):
#include <fpu_control.h>   // _FPU_GETCW/_FPU_SETCW and the precision masks (glibc, x86)

inline void fpu_round_to_IEEE_double()
{
    unsigned short cw = 0;
    _FPU_GETCW(cw);        // Get the FPU control word
    cw &= ~_FPU_EXTENDED;  // Mask out 80-bit (extended) register precision
    cw |= _FPU_DOUBLE;     // Mask in 64-bit (double) register precision
    _FPU_SETCW(cw);        // Set the FPU control word
}
I think this is distinct from the rounding modes discussed by @Alok.
Upvotes: 10