Reputation: 4405
I am implementing a Runge–Kutta procedure, which includes several time-critical multiplications with fixed, complicated fractions (which are not magic numbers but inherent to the algorithm), and I want these multiplications to be performed as efficiently as possible whilst keeping the code readable.
For simplicity’s sake, let’s assume my code would look like the following, if I did not need to care about efficiency:
for (int i = 0; i < n; i++)
    a[i] += f(i) + b[i] * (2197/4104.);
Can I assume that every reasonable compiler (with optimisation) will effectively replace 2197/4104 by 0.535331…? If not, what are good ways to ensure this? Would defining a const double suffice, for example?
(Note that I am not interested in other possibilities for optimisation of the above code – it’s really just an example.)
Upvotes: 4
Views: 1359
Reputation: 41757
Using any recent compiler, the evaluation will be done at compile time.
However, if you cannot guarantee that the compiler will, simply lift the calculation out of the loop (making it a const long double if possible):
const long double fraction = 2197 / 4104.;
for (int i = 0; i < n; i++)
    a[i] += f(i) + b[i] * fraction;
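If your compiler supports C++11, you can also turn the compile-time evaluation into a language guarantee rather than relying on the optimiser; a constexpr variable must be initialised by a constant expression, so the division has to happen at compile time (a sketch, assuming C++11 is available):
constexpr double fraction = 2197 / 4104.;  // the language requires this to be evaluated at compile time
for (int i = 0; i < n; i++)
    a[i] += f(i) + b[i] * fraction;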
If accuracy of summation is important and the magnitudes of f(i) or b[i] are potentially large (I assume they could be), you're better off not using += to sum the values; instead, look at the Kahan summation algorithm, which sums with minimal loss of precision. Alternatively, try to work with integral types while summing and perform the division as a final step.
Upvotes: 4
Reputation: 35388
In order to skip the floating-point arithmetic, you can always multiply your numbers by 100000 (I just looked at Runge–Kutta on the wiki; that's where the number comes from), do the entire algorithm on integers, and divide by 100000 again when you actually need the result. You would use 2197/4104 as 53533, and calculate the scaled values for the others (28561/56430 = 0.50613 -> 50613, etc.) too.
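Applied to the example loop, the idea might look like this (a sketch only; the names SCALE, FRAC, a_fixed, b_fixed and f_fixed are hypothetical, and long long assumes the scaled products fit without overflow):
const long long SCALE = 100000;
const long long FRAC  = 53533;   // round(2197/4104. * SCALE)
for (int i = 0; i < n; i++)
    a_fixed[i] += f_fixed(i) + b_fixed[i] * FRAC / SCALE;
// a_fixed[i] stays scaled by SCALE throughout; divide by SCALE once at the very end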
Upvotes: 0