Reputation: 2726
I am using C++ to implement the transfinite interpolation algorithm (http://en.wikipedia.org/wiki/Transfinite_interpolation). Everything looked good until I tried some small numbers: the results look weird and incorrect. I guess it must have something to do with loss of precision. The code is
int imax = Nx;
int jmax = Ny;
for (int j = 0; j < Ny+1; ++j)
{
    for (int i = 0; i < Nx+1; ++i)
    {
        double CA1 = (double)i/imax; // s
        double CB1 = (double)j/jmax; // t
        double CA2 = 1.0-CA1;        // 1-s
        double CB2 = 1.0-CB1;        // 1-t
        point U = BD41[j]*CA2 + BD23[j]*CA1;
        point V = BD12[i]*CB2 + BD34[i]*CB1;
        point UV =
              (BD12[0]*CB2    + BD41[jmax]*CB1)*CA2
            + (BD12[imax]*CB2 + BD23[jmax]*CB1)*CA1;
        tfiGrid[k][j][i] = U + V - UV;
    }
}
I guess when BD12[i] (or BD34[i], or BD41[j], or BD23[j]) is very small, the rounding error or something accumulates and becomes non-negligible. Any ideas how to handle this sort of situation?
PS: Even though similar questions have been asked millions of times, I still cannot figure out whether it is related to my multiplication, division, subtraction, or what.
Upvotes: 1
Views: 1767
Reputation: 11831
Numerical (i.e., floating point) computation is hard to do precisely. You have to be particularly wary of subtractions; it is mostly there that you lose precision. In this case the 1.0 - CA1 and such are suspect (if CA1 is very small, you'll get 1). Reorganize your expressions; the Wikipedia article (a stub!) probably has them written for understanding (showing symmetries, ...) and aesthetics, not for numerical robustness. One possible rewrite is sketched below.
Search for courses/lecture notes on numerical computation; they should include an introductory chapter on this. And check out Goldberg's classic What every computer scientist should know about floating point arithmetic.
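To make "reorganize" concrete, here is a minimal sketch (my illustration, not code from the question): deriving the complementary weight from integer arithmetic replaces the two rounding steps of CA1 = i/imax followed by 1.0 - CA1 with a single division, since imax - i is exact.

#include <cstdio>

// Sketch: compute both interpolation weights from exact integer
// arithmetic instead of subtracting from 1.0.  (imax - i) is exact,
// so the complement incurs a single rounding (the division) rather
// than compounding the division's rounding with the subtraction's.
void weights(int i, int imax, double& s, double& one_minus_s)
{
    s           = (double)i / imax;          // CA1 in the question
    one_minus_s = (double)(imax - i) / imax; // CA2, without 1.0 - CA1
}

int main()
{
    double s, t;
    weights(3, 7, s, t);
    std::printf("s = %.17g, 1-s = %.17g\n", s, t);
}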
Upvotes: 1
Reputation: 154047
In addition to the points that Antoine made (all very good): it's probably worth remembering that adding two values with very different orders of magnitude will cause a very large loss of precision. For example, if CA1 is less than about 1E-16, 1.0 - CA1 is probably still 1.0, and even if it is just a little larger, you'll lose quite a few digits of precision. If this is the problem, you should be able to isolate it just by putting a few print statements in the inner loop and looking at the values you are adding (or perhaps even with a debugger).
What to do about it is another question. There may be some numerically stable algorithms for what you are trying to do; I don't know. Otherwise, you'll probably have to detect the problem dynamically and rewrite the equations to avoid it when it occurs. (For example, to detect if CA1 is too small for the addition, you might check whether 1.0 / CA1 is more than a couple of thousand, or million, or however much precision you can afford to lose.)
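A minimal sketch of what that dynamic check might look like (the 1e6 threshold, i.e. tolerating roughly six lost decimal digits, is an arbitrary example, not a value from the answer):

#include <cstdio>
#include <initializer_list>

// Flag values of CA1 that are so small relative to 1.0 that the
// subtraction 1.0 - CA1 discards most of CA1's digits.
bool losesPrecision(double CA1)
{
    return CA1 != 0.0 && 1.0 / CA1 > 1e6;
}

int main()
{
    for (double ca1 : {0.5, 1e-3, 1e-9, 1e-17}) {
        std::printf("CA1 = %g, 1.0 - CA1 = %.17g%s\n",
                    ca1, 1.0 - ca1,
                    losesPrecision(ca1) ? "  <-- likely precision loss" : "");
    }
}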
Upvotes: 2
Reputation: 14104
Your algorithm doesn't appear to be accumulating errors by re-using previous computations at each iteration, so it's difficult to answer your question without looking at your data.
Basically, you have 3 options:
Increase the precision of your numbers: x86 CPUs can handle float (32-bit), double (64-bit) and often long double (80-bit). Beyond that you have to use "soft" floating point, where all operations are implemented in software instead of hardware. There is a good C lib that does just that: MPFR, based on GMP; GNU recommends using MPFR. I strongly recommend using easier-to-use C++ wrappers like Boost Multiprecision (see the sketch after this answer). Expect your computations to be orders of magnitude slower.
Analyze where your precision loss comes from by using something more informative than a single scalar number for your computations. Have a look at interval arithmetic and the MPFI lib, based on MPFR. CADNA is another, very promising solution, based on randomly changing the rounding modes of the hardware, and it comes with a low run-time cost.
Perform static analysis, which doesn't even require running your code and works by analyzing your algorithms. Unfortunately, I don't have experience with such tools, so I can't recommend anything beyond googling.
I think the best course is to run static or dynamic analysis while developing your algorithm, identify your precision problems, and address them by either changing the algorithm or using higher precision for the most unstable variables only (and not the others, to avoid too much performance impact at run-time).
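If you try the higher-precision route, here is a minimal sketch with Boost Multiprecision (my example, assuming Boost is installed; cpp_bin_float_50 is a header-only 50-decimal-digit software float):

#include <boost/multiprecision/cpp_bin_float.hpp>
#include <iostream>

// Compare 1.0 - CA1 in hardware double and in a 50-digit software float.
namespace mp = boost::multiprecision;

int main()
{
    double small = 1e-20;                  // far below double's epsilon (~2.2e-16)
    mp::cpp_bin_float_50 smallMp("1e-20");

    std::cout.precision(30);
    std::cout << "double:    " << (1.0 - small)                       << '\n'  // prints 1
              << "50 digits: " << (mp::cpp_bin_float_50(1) - smallMp) << '\n'; // keeps the 1e-20
}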
Upvotes: 1
Reputation: 13
The accuracy of the arithmetic built into C/C++ is limited. Of course errors will accumulate in your case.
Have you considered using a library that provides higher precision? Maybe have a look at https://gmplib.org/
A short example that illustrates the difference in accuracy:
#include <cstdio>
int main()
{
    double d, e, f;
    d = 1000000000000000000000000.0;  // 1e24: needs about 80 mantissa bits, double has 53
    e = d + 1.0;                      // the added 1.0 is rounded away
    f = e - d;
    printf("the difference of %f and %f is %f\n", e, d, f);
}
This will print 0, not 1: a double's 53-bit mantissa cannot represent 1e24 + 1, so the addition leaves d unchanged. With gmplib the code would look like:
#include "gmp.h"
mpz_t m, n, r;
mpz_init( m);
mpz_init( n);
mpz_init( r);
mpz_set_str( m, "1 000 000 000 000 000 000 000 000", 10);
mpz_add_ui( n, m, 1);
mpz_sub( r, n, m);
printf( "the difference of %s and %s is %s (using gmp)\n",
mpz_get_str( NULL, 10, n),
mpz_get_str( NULL, 10, m),
mpz_get_str( NULL, 10, r));
mpz_clear( m);
mpz_clear( n);
mpz_clear( r);
This will print 1.
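Note that mpz_t is GMP's arbitrary-precision integer type; for a floating-point computation like the interpolation in the question, GMP's mpf_t (or the MPFR library) is the relevant type. Here is the same test at 256 bits of mantissa (my sketch, not part of the original answer):

#include <stdio.h>
#include "gmp.h"

int main()
{
    mpf_t d, e, f;
    mpf_init2(d, 256);  // 256-bit mantissa
    mpf_init2(e, 256);
    mpf_init2(f, 256);

    mpf_set_str(d, "1e24", 10);  // exactly representable at this precision
    mpf_add_ui(e, d, 1);         // e = d + 1
    mpf_sub(f, e, d);            // f = e - d

    gmp_printf("the difference is %.4Ff (using mpf)\n", f);  // prints 1.0000

    mpf_clear(d);
    mpf_clear(e);
    mpf_clear(f);
}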
Upvotes: 1