Reputation: 69
Why is float preferred for precision? Couldn't very large integers represent the precision that floats give, while being deterministic across all machines? For example, an object moving 0.48124 meters in floating point could instead be represented by an object moving 48124 micrometers in an int or long.
Upvotes: 1
Views: 439
Reputation: 5897
Actually, in some applications integers are preferred, for a number of reasons. In particular, integers, unlike floating point, are translation-invariant:
x1 - x2 == (x1 - displacement) - (x2 - displacement)
This is very important in some geometric engines. For example, if you are computing a large grid of identical shapes, each determined by some parameter, then you can group shapes into sets with identical parameters, compute what happens for one representative of each set, and copy the results over to the other shapes with the same parameters. Translation invariance ensures that such an optimization is faithful.
Floating point, on the other hand, is NOT translation-invariant:
0.0002 - 0.0001 != (0.0002 - 1000000) - (0.0001 - 1000000) // this example in single precision
This sometimes causes very nasty surprises that are difficult to debug.
Upvotes: 0
Reputation: 81123
Although many kinds of code could use some kind of fixed-point values more efficiently than floating-point, different kinds of code and different situations have different requirements. Some situations require storing numbers from zero to a million accurate to one part per thousand; others require storing numbers from zero to one accurate to one part per billion. A fixed-point format which is barely adequate for some purposes will be vastly overkill for others.

If a language can efficiently support working with numbers stored in a wide variety of formats, fixed-point math can have some pretty huge advantages. On the other hand, it's generally easier for a language to support one to three floating-point formats than the many fixed-point formats that would otherwise be necessary. Further, unless a language makes it possible for a single routine to work with a variety of fixed-point formats, writing general-purpose math routines is apt to be difficult. Perhaps compiler technology has advanced enough that using a wide variety of fixed-point types might be practical, but hardware floating-point has advanced enough to largely obviate the need for such a thing.
Upvotes: 0
Reputation: 153348
The degree of deterministic behavior is independent of the data representation. It simply takes a longer specification to precisely define floating-point math than integer math, and floating point is more difficult to implement correctly.
IEEE floating point strives to make floating point deterministic across all machines.
Integers could be one's or two's complement and of various widths, and thus not deterministic for certain calculations. So integer math, unto itself, can have troubles.
Yes, big integers could be, and have been, used as OP suggests. But the F-P benefits, as @Eric Postpischil points out, are numerous. Big ints are used in select situations, including cryptography.
Watch for upcoming decimal floating point standards to deal with issues like banking.
Upvotes: 0
Reputation: 222362
Floating-point is preferred over integers for some calculations because of its dynamic scale.

You are asking about floating-point versus long. A 64-bit integer might have advantages over a 32-bit floating-point format in some situations, but often the proper comparison is between comparable sizes, such as 32-bit integer to 32-bit floating-point and 64-bit integer to 64-bit floating-point. In those cases, the question is whether the benefits of a dynamic scale outweigh the loss of a few bits of precision.
Upvotes: 6
Reputation: 801
It'd be 481.24 millimeters, which is part of where the problem shows up. Using integers (or long ints), you're very likely to run into a situation where you'll encounter some sort of rounding off. Maybe your program would guarantee that the smallest units you care about are millimeters, but that still leads to a somewhat ugly standard for writing units. It's not hard to figure out that 100000 millimeters == 100 meters, but it's not instantly obvious in the way that 100.000 is, and in an application where you might deal mostly in meters or kilometers but still want precision, 3463823 is a lot more annoying to read than 3463.823.
In addition, there are plenty of situations where you do care about sizes down to the inconveniently small, and while with floats you can trim the number of digits you display, the data is still there. So 3.141592653 (and so on, up to whatever the floating-point precision is) trimmed to 3.14 meters is a lot easier to deal with than 3141592653 nanometers.
Upvotes: 0