DanielHsH

Reputation: 4453

How to force 32-bit floating point calculation consistency across different platforms?

I have a simple piece of code that operates on floating point values: a few multiplications, divisions, exp() calls, subtractions and additions in a loop. When I run the same piece of code on different platforms (like PC, Android phones, iPhones) I get slightly different results. The result is nearly equal on all the platforms but has a very small discrepancy - typically around 1/1000000 of the floating point value.

I suppose the reason is that some phones don't have floating point registers and simulate those calculations with integers, while others do have floating point registers but with different implementations. There is evidence of that here: http://christian-seiler.de/projekte/fpmath/

Is there a way to force all the platforms to produce consistent results? For example, a good & fast open-source library that implements floating point arithmetic with integers (in software), so that I can avoid hardware implementation differences.

The reason I need exact consistency is to avoid compounding errors across layers of calculations. Currently those compounding errors do produce significantly different results. In other words, I don't care so much which platform has the more correct result; I want to force consistency so that I can reproduce equal behavior. For example, a bug that was discovered on a mobile phone is much easier to debug on a PC, but I need to reproduce this exact behavior.

Upvotes: 2

Views: 1293

Answers (2)

Mats Petersson

Reputation: 129314

32-bit floating point math will, for a given calculation, at best have a precision of 1 in 16777216 (1 in 2^24). Functions such as exp are often implemented as a sequence of calculations, so they may have a larger error due to this. If you do several calculations in a row, the errors will add and multiply up. In general, float has about 6-7 significant decimal digits of precision.
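To see that limit concretely, here is a minimal C sketch (the file layout and variable names are mine, just for illustration) that prints the machine epsilon of float and shows where 32-bit floats stop representing every integer exactly:

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* FLT_EPSILON is the gap between 1.0f and the next representable float:
       2^-23, roughly 1.19e-7 -- which is where the 6-7 digit figure comes from. */
    printf("FLT_EPSILON = %g\n", (double)FLT_EPSILON);

    /* Beyond 2^24 a float can no longer represent every integer exactly. */
    float big = 16777216.0f;                                    /* 2^24 */
    printf("16777216 + 1 as float = %.1f\n", (double)(big + 1.0f));
    /* prints 16777216.0 -- the +1 is lost */
    return 0;
}
```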

As one comment says, check that the rounding mode is the same. Most FPUs have a "round to nearest" (rtn), "round to zero" (rtz) and "round to even" (rte) mode that you can choose. The default on different platforms MAY vary.
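If you want to pin the rounding mode explicitly rather than rely on the platform default, C99's <fenv.h> lets you query and set it at run time. A minimal sketch (assuming the platform actually honours fesetround, which not every compiler/FPU combination does):

```c
#include <stdio.h>
#include <fenv.h>

/* Tell the compiler we touch the FP environment (C99). Some compilers
   ignore this pragma, so treat the whole thing as a sketch. */
#pragma STDC FENV_ACCESS ON

int main(void)
{
    /* Force the same rounding mode everywhere before doing the math. */
    fesetround(FE_TONEAREST);   /* alternatives: FE_TOWARDZERO, FE_UPWARD, FE_DOWNWARD */

    int mode = fegetround();
    printf("round-to-nearest active: %s\n",
           mode == FE_TONEAREST ? "yes" : "no");
    return 0;
}
```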

If you perform additions or subtractions of fairly small numbers with fairly large numbers, you will get a greater error from these sorts of operations, because the numbers have to be normalized first.

Normalized means shifted such that both numbers have the decimal point lined up - just like when you do this on paper, you have to fill in extra zeros to line up the two numbers you are adding - but of course on paper you can add 12419818.0 to 0.000000001 and end up with 12419818.000000001, because paper has as much precision as you can be bothered with. Doing this in float or double will give you the same number as before.
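A quick way to see that absorption on a real machine (using the same numbers as the paper example above; purely an illustration):

```c
#include <stdio.h>

int main(void)
{
    /* The small addend is far below the precision of the large one in float,
       so it is lost entirely after normalization; double keeps it (to within
       one unit in the last place). */
    float  f = 12419818.0f + 0.000000001f;
    double d = 12419818.0  + 0.000000001;

    printf("float : %.9f\n", (double)f);  /* 12419818.000000000 -- the 1e-9 is gone */
    printf("double: %.9f\n", d);          /* ~12419818.000000002 -- kept, within 1 ulp */
    return 0;
}
```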

There are indeed libraries that do floating point math in software - the most popular being MPFR - but it is a "multiprecision" library and will be fairly slow, because such libraries are not really built to be a plug-in replacement for float, but rather a tool for when you want to calculate pi to thousands of digits, or to work with prime numbers in ranges much larger than 64 or 128 bits, for example.

It MAY solve the problem to use such a library, but it will be slow.

A better choice would be moving from float to double, which should have a similar effect (double has 53 bits of mantissa, compared to the 24 in a 32-bit float, so more than twice as many bits in the mantissa). It should still be available as hardware instructions in any reasonably recent ARM processor, and as such be relatively fast, though not as fast as float (a hardware FPU is available from ARMv7 - which is certainly what you find in an iPhone, at least from the iPhone 3 - and in middle to high end Android devices; I managed to find that the Samsung Galaxy ACE has an ARM9 processor [first introduced in 1997], so it has no floating point hardware).
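To illustrate how much moving to double narrows the drift, here is a small sketch (the constants and iteration count are arbitrary, chosen only to make the effect visible):

```c
#include <stdio.h>

int main(void)
{
    /* Accumulate the same series in float and in double; the float sum
       drifts because every addition rounds to 24 bits of mantissa. */
    float  fsum = 0.0f;
    double dsum = 0.0;
    for (int i = 0; i < 1000000; ++i) {
        fsum += 0.1f;
        dsum += 0.1;
    }
    printf("float  sum: %.6f\n", (double)fsum);   /* noticeably off from 100000 */
    printf("double sum: %.6f\n", dsum);           /* much closer to 100000      */
    return 0;
}
```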

Upvotes: 1

janneb

Reputation: 37188

One relatively widely used and high quality software FP implementation is MPFR. It is a lot slower than hardware FP, though.

Of course, this won't solve the actual problems your algorithm has with compound errors; it will just make it produce the same errors on all platforms. Probably a better approach would be to design an algorithm which isn't as sensitive to small differences in FP arithmetic, if feasible. Or, if you go the MPFR route, you can use a higher precision FP type and see if that helps; there is no need to limit yourself to emulating the hardware single/double precision.
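For illustration, a minimal sketch of how that could look with MPFR at a fixed 128-bit precision (the precision and input value are arbitrary; assumes MPFR 3.0 or later and linking with -lmpfr -lgmp):

```c
#include <stdio.h>
#include <mpfr.h>

int main(void)
{
    mpfr_t x, y;
    mpfr_init2(x, 128);               /* 128 bits of mantissa            */
    mpfr_init2(y, 128);

    mpfr_set_d(x, 0.5, MPFR_RNDN);    /* same round-to-nearest everywhere */
    mpfr_exp(y, x, MPFR_RNDN);        /* y = exp(x), correctly rounded    */

    mpfr_printf("exp(0.5) = %.30Rf\n", y);

    mpfr_clear(x);
    mpfr_clear(y);
    return 0;
}
```

Because MPFR does everything in software at the precision you ask for, the result is bit-identical across platforms, which is exactly the reproducibility the question is after, at the cost of speed.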

Upvotes: 3
