Reputation: 25128
After upgrading a C++ project to Visual Studio 2013, the program's results have changed because of the new VC compiler's different floating-point behavior. The floating-point model is set to /fp:precise.
In Visual Studio 2008 (v9.0)
float f = 0.4f; //it produces f = 0.400000001
float f6 = 0.400000006f; //it produces f6 = 0.400000001
In Visual Studio 2013 (v12.0)
float f = 0.4f; //it produces f = 0.400000006
float f1 = 0.40000001f; //it produces f1 = 0.400000006
The project settings are identical (converted).
I understand that there is a certain liberty in the floating-point model, but I don't like that behavior has changed, and that people working with the old and new versions of Visual Studio can't reproduce some bugs reported by other developers. Is there any setting that can be changed to enforce the same behavior across different versions of Visual Studio?
I tried setting the platform toolset to vs90, and it still produces 0.400000006 in VS2013.
I tracked the hexadecimal values in the Memory window. The hexadecimal values of f, f1 and f6 are all the same; the difference is only in how the Watch window displays these float values. Furthermore, the problem also shows up in arithmetic: multiplying the same decimal values gives different results.
In Visual Studio 2008 (v9.0)
float a = 0.26f; //memory b8 1e 85 3e 00 00 40 40, watch 0.25999999
float b = 19400;
unsigned long c = (unsigned long)((float)b * a); //5043
In Visual Studio 2013 (v12.0)
float a = 0.26f; //memory b8 1e 85 3e 00 00 40 40, watch 0.259999990
float b = 19400;
unsigned long c = (unsigned long)((float)b * a); //5044
Upvotes: 2
Views: 2038
Reputation: 941585
The behavior is correct; the float type can store only about 7 significant decimal digits. The rest are just random noise. You need to fix the bug in your code: either you are displaying too many digits, thus revealing the random noise to a human, or your math model is losing too many significant digits and you should be using double instead.
There was a significant change in VS2012 that affects the appearance of the noise digits, part of the code-generator changes that implement auto-vectorization. The 32-bit compiler traditionally used the FPU for calculations, which is notorious for producing different random noise: calculations are performed in an 80-bit intermediate format and get truncated when stored back to memory. The exact moment this truncation occurs can be unpredictable due to optimizer choices, thus producing different random noise.
The code generator now uses SSE2 instructions instead of FPU instructions, as it already did for 64-bit code. These produce more consistent results that are not affected by the optimizer's choices, since the 80-bit intermediate format is no longer used. A backgrounder on the trouble with the FPU is available in this answer.
This is the behavior going forward; the FPU is never coming back. Adjust your expectations accordingly, there's a "new normal". And fix the bugs.
Upvotes: 4