Reputation: 5599
My apologies about the very ambiguous title.
I was refactoring some really old (c89 old) C code today and hit a really strange rounding issue.
The old code used a bunch of #defines to declare some of the values used in a particular calculation. In refactoring the code I was going to wrap the calculation in a more reusable function which accepts the values as arguments; however, I hit a rather weird rounding issue under Ubuntu (not the case on Windows).
It might be easier to explain with an example:
#include <stdio.h>

#define SOMEVAR 0.001

void test(double value) {
    int calc1 = (int)(1.0 / SOMEVAR); // using the #define directly (so an in-place 0.001)
    int calc2 = (int)(1.0 / value);   // using the parameter value

    printf("#define: %d\n", calc1); // prints 1000, as expected
    printf("param:   %d\n", calc2); // prints 999 on Ubuntu and 1000 on Windows
}

int main(int argc, char *argv[]) {
    test(SOMEVAR);
}
Compiled on Ubuntu with the following command:
gcc -std=c99 -o test test.c
I know there's a loss of precision when it comes to floating-point arithmetic, but surely this is a problem that can be worked around? I'd really like to encapsulate the calculations in a reusable function, but with this loss of precision in switching from #defines to function arguments, the calculations are all going to be incorrect.
As an example of what I mean, here's an excerpt of one point in the code where this makes a huge difference:
#define DT 0.001
// -- snip
int steps = (int)(1.0 / DT); // evaluates to 1000
for(int i = 0; i < steps; ++i)
    // do stuff
vs
void calculate(double dt) {
    int steps = (int)(1.0 / dt); // evaluates to 999

    for(int i = 0; i < steps; ++i)
        // do stuff
}
As you can see, the function-ized version will iterate one less time than the #define version, which means the results will never match.
Has anyone else come up against this problem? Is there a solution, or should I just stop fighting the #defines, suck it up and work around it?
EDIT: This happens when using gcc or g++ (my refactored version will be written in C++, rather than C99 C; I just used C in this example for simplicity).
Upvotes: 1
Views: 64
Reputation: 158469
This looks like constant folding. If we look at the godbolt output for your first set of code, we can see that the first calculation is boiled down to a constant:
movl $1000, %esi #,
So in this example the compiler performs the calculation during translation, since both values are constants and it knows the expression is really just:
1.0 / 0.001
whereas in the second case, since one of the values is not a constant, the compiler evaluates the expression at run time:
divsd %xmm0, %xmm1 # value, D.1987
cvttsd2si %xmm1, %esi # D.1987, calc2
So unfortunately the calculations are not equivalent and can in some cases lead to different results, although I cannot reproduce the results you are seeing on any online compiler yet.
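For illustration, here is a minimal sketch of why the two can differ (my wording of the output is hedged, since the exact result depends on how the target holds the intermediate quotient): the closest double to 0.001 is slightly larger than 0.001, so the mathematically exact quotient is just below 1000, and whether the cast yields 999 or 1000 depends on the rounding of that intermediate value.

#include <cstdio>

int main() {
    const double dt = 0.001;                // nearest double to 0.001, slightly larger than 0.001
    std::printf("dt       : %.25f\n", dt);  // makes the representation error visible
    std::printf("1.0 / dt : %.25f\n", 1.0 / dt);
    // 1000 when the quotient is rounded to double; 999 is possible when a
    // more precise intermediate result is truncated directly.
    std::printf("(int)    : %d\n", static_cast<int>(1.0 / dt));
    return 0;
}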
If you are going to refactor to C++ and can use C++11, then you can always use constexpr to get compile-time evaluation:
constexpr double SOMEVAR = 0.001;
//....
constexpr int calc1 = (int)(1.0 / SOMEVAR);
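And if the goal is still a reusable function, a constexpr function can give you both (a sketch assuming C++11; the name steps_for is just illustrative):

constexpr double DT = 0.001;

// When called with a constant expression, the division is evaluated during
// translation, just like the #define version.
constexpr int steps_for(double dt) {
    return static_cast<int>(1.0 / dt);
}

constexpr int steps = steps_for(DT); // forced compile-time evaluation
// Holds with IEEE doubles, where 1.0 / 0.001 rounds to exactly 1000.0.
static_assert(steps == 1000, "folded at compile time");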
Upvotes: 3