Reputation: 85
In a previous project of mine (which I posted a question about here), I was very annoyed to realize that I would have to deal with rounding errors in floating point numbers. The rounding error was an issue for me because I was doing collision detection, and letting the rounding error round up meant that I was allowing objects to collide. I see a couple of solutions to this, but both require guesswork:
A) One could ignore small collisions, but how do you know how small a collision to expect? The nature of floating point makes rounding error/precision hard to predict, other than knowing that the precision of the fractional portion gets coarser as the number gets farther from 0.
B) One could automatically subtract a small number to make sure floating point rounding errors generally don't cause issues. This solution has the same problem as the one above.
I decided to go with solution B, as it was easier to implement. I do admit that rounding error could be wrangled somewhat by translating to local space before doing such calculations, but that doesn't feel like an elegant solution to me.
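To make solution B concrete, here is a minimal sketch of the kind of thing I mean, using a 1-D interval overlap test (the names and the EPSILON value are mine, and the EPSILON being a guess is exactly the problem):

```c
#include <math.h>
#include <stdbool.h>

/* Solution B, sketched: ignore overlaps smaller than a guessed
 * tolerance. EPSILON is pure guesswork; there is no principled way
 * to pick one value that works at every magnitude. */
#define EPSILON 1e-5f

bool intervals_collide(float min_a, float max_a, float min_b, float max_b) {
    /* How deeply the two intervals overlap (negative means no overlap). */
    float penetration = fminf(max_a, max_b) - fmaxf(min_a, min_b);
    /* Treat tiny overlaps as rounding noise rather than collisions. */
    return penetration > EPSILON;
}
```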
I then began to think about another solution, which has since grown into a complete idea. Why aren't integers/longs used as fractions? The advantage is predictable stepping and rounding error: an int can only go up or down by 1, so that is the only kind of stepping you ever have to worry about. It would also make behavior more predictable in general; you wouldn't have to think about the kind of input your code is receiving, because behavior wouldn't change as numbers get farther from 0. Also, if I remember correctly from my limited research, floating point operations can take loads of CPU cycles, while I think ints don't have such issues.
Let me explain a little bit deeper what I mean...
0b00000000000000001111111111111111 represents a binary number that would usually be read as a 32-bit integer. But what if the 1s in that binary number represented the fractional portion of a number, so that 0b0000000000000000.1111111111111111 would represent a number very, very close to 1? This scheme would give unsigned-short-level precision for the fractional part of the number and signed-short-level precision for the whole part (yeah, kinda low). If a long were used to represent the number instead, the precision on both sides would be int-level, which is often more than enough. The number of bits used for the fractional portion could be variable as well, depending on what you need, e.g. 48 bits for the whole number and 16 bits for the fraction.
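Here is a rough sketch of what that 16.16 layout could look like in C (the type and function names are mine, just for illustration):

```c
#include <stdint.h>

/* The 16.16 scheme described above: the low 16 bits hold the fraction,
 * the high 16 bits hold the signed whole part. */
typedef int32_t fixed16_16;

#define FRAC_BITS 16
#define FIX_ONE ((fixed16_16)1 << FRAC_BITS)   /* 1.0 == 0x00010000 */

/* Convert a small integer to fixed point (multiply is well-defined
 * even for negative values, unlike a left shift). */
static inline fixed16_16 fix_from_int(int16_t i) {
    return (fixed16_16)i * FIX_ONE;
}

/* Truncate back to an integer (arithmetic shift on typical compilers,
 * which floors toward negative infinity for negative values). */
static inline int16_t fix_to_int(fixed16_16 f) {
    return (int16_t)(f >> FRAC_BITS);
}

/* The example from above: 0b0000000000000000.1111111111111111 is
 * FIX_ONE - 1 == 0x0000FFFF, i.e. 65535/65536, very close to 1. */
```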
There could probably be a software implementation of this, but a hardware implementation would be cool and could be more efficient. The variable number of bits of precision for the fraction could be difficult, but the rest sounds very doable to me. Tell me if I am wrong in the assumptions I have made, and whether there is a reason this doesn't already exist (or, if it does, what it is called). I can't be the first person to think of this. In case it wasn't obvious, I come
Upvotes: 0
Views: 76
Reputation: 41513
What you're describing is known as "fixed point", and it's used quite frequently. Some issues, though:
A 32-bit float wouldn't represent 10^10 - 1 exactly, but neither would a 32-bit fixed point number (because it wouldn't be able to represent at least one of the values). Likewise, 1/3 wouldn't be any more exact in fixed point than it would in floating point. Floating point error comes from gaps between floating point numbers, but fixed point numbers have gaps too, and the median gap between any two consecutive floating point numbers is roughly the same as the median gap between any two consecutive fixed point numbers (assuming that their median values are also roughly the same, which is the case in your example).
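For instance, here's 1/3 in a 16.16 fixed point format next to 1/3 as a single-precision float (a quick sketch; the values in the comments assume IEEE 754):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 1/3 in 16.16 fixed point: the integer division truncates. */
    int32_t third_fixed = (1 << 16) / 3;              /* 21845 */
    printf("fixed: %.10f\n", third_fixed / 65536.0);  /* ~0.3333282471 */

    /* 1/3 as a float: rounded to the nearest representable value. */
    float third_float = 1.0f / 3.0f;
    printf("float: %.10f\n", (double)third_float);    /* ~0.3333333433 */

    /* Both are wrong; they're just wrong in slightly different ways. */
    return 0;
}
```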
As for hardware implementations: you already have one! All arithmetic operations on fixed point types can be emulated straightforwardly with integers. (Indeed, addition and subtraction can be used without any changes.) Things like square roots and trigonometric functions get a little hairier, but as long as you have a floating point type with a mantissa big enough to store one of your fixed point numbers (e.g. a double to store a 32-bit fixed point number), you can leverage that hardware as well.
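A sketch of what that emulation looks like for a 16.16 format (names are mine; addition really is just the integer instruction, multiplication needs a widening step, and square root can round-trip through double):

```c
#include <math.h>
#include <stdint.h>

typedef int32_t fix16;   /* 16.16 fixed point */

/* Addition and subtraction are the plain integer instructions. */
static inline fix16 fix16_add(fix16 a, fix16 b) { return a + b; }
static inline fix16 fix16_sub(fix16 a, fix16 b) { return a - b; }

/* Multiplication: widen to 64 bits so the intermediate product
 * doesn't overflow, then shift away the extra 16 fraction bits. */
static inline fix16 fix16_mul(fix16 a, fix16 b) {
    return (fix16)(((int64_t)a * b) >> 16);
}

/* Square root: a double's 53-bit mantissa holds any 32-bit fixed
 * point value exactly, so we can borrow the floating point hardware. */
static inline fix16 fix16_sqrt(fix16 a) {
    return (fix16)(sqrt(a / 65536.0) * 65536.0);
}
```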
You also mentioned the use of variably sized types. Those can come in handy in certain cases, but they're not common: IME, the required ranges for datatypes in numerical applications tend to be either "not that big" or "basically infinite". And you won't be able to store 1/3 exactly no matter how big your fraction gets.
Upvotes: 1