Reputation: 22245
I was calculating projections of normalized 2D points and happened to notice that they were more accurate than when projecting the points without normalizing them. My code is in C++ and I compile with the NDK for an Android device that lacks an FPU (floating-point unit).
Why do I gain accuracy in calculations with C++ when I first normalize the values so they are between 0 and 1?
Is it generally true in C++ that you gain accuracy in arithmetic if you work with variables that are between 0 and 1, or is it specific to compiling for an ARM device?
Upvotes: 6
Views: 682
Reputation: 70546
If you do matrix based calculations, you might want to compute the condition numbers of your matrices. Essentially, the condition number measures the size of the numerical error in your answer as a function of the size of the numerical error in your inputs. A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned.
For some problems, you can preprocess your data (e.g. rescaling the units in which you measure certain variables) so that the condition number becomes more favorable. E.g. a financial spreadsheet in which some columns are measured in dollar cents and others in billions of dollars is ill-conditioned.
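To make the scaling point concrete, here is a minimal sketch (my own illustration, not taken from any particular library) that estimates the 1-norm condition number κ₁(A) = ‖A‖₁·‖A⁻¹‖₁ of a 2×2 matrix, once with columns on wildly different scales and once after rescaling; the matrices and units are made-up examples:

```cpp
// Minimal sketch: 1-norm condition number of a 2x2 matrix,
// kappa_1(A) = ||A||_1 * ||A^-1||_1, before and after rescaling a column.
#include <cmath>
#include <cstdio>

// 1-norm of a 2x2 matrix: the maximum absolute column sum.
double norm1(double m[2][2]) {
    double c0 = std::fabs(m[0][0]) + std::fabs(m[1][0]);
    double c1 = std::fabs(m[0][1]) + std::fabs(m[1][1]);
    return c0 > c1 ? c0 : c1;
}

// Condition number via the explicit 2x2 inverse (assumes det != 0).
double cond1(double a[2][2]) {
    double det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
    double inv[2][2] = {{ a[1][1] / det, -a[0][1] / det},
                        {-a[1][0] / det,  a[0][0] / det}};
    return norm1(a) * norm1(inv);
}

int main() {
    // Columns on wildly different scales (think cents vs. billions of dollars).
    double bad[2][2]  = {{1.0, 2.0e9}, {3.0, 4.0e9}};
    // The same data with the second column rescaled to comparable units.
    double good[2][2] = {{1.0, 2.0}, {3.0, 4.0}};
    std::printf("kappa_1(bad)  = %g\n", cond1(bad));   // around 1e10
    std::printf("kappa_1(good) = %g\n", cond1(good));  // around 21
    return 0;
}
```

The badly scaled matrix amplifies input error by roughly nine orders of magnitude more than the rescaled one, which is exactly the effect the condition number quantifies.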
See Wikipedia for a thorough explanation.
Upvotes: 3
Reputation: 882146
You have a misunderstanding of precision. Precision is basically the number of bits available to you for representing the mantissa of your number.
You may find that you seem to have more digits after the decimal point if you keep the scale between 0 and 1, but that's not extra precision; the precision doesn't change at all based on the scale or sign of the value.
For example, single precision has 23 explicitly stored mantissa bits (24 bits of precision counting the implicit leading bit) whether your number is 0.5 or 1e38. Double precision stores 52 mantissa bits (53 with the implicit bit).
See this answer for more details on IEEE754 bit-level representation.
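As a quick check (a minimal sketch of my own, not from the linked answer), you can print the spacing between adjacent floats at different magnitudes: the absolute gap grows with the value, but the relative gap, which is what precision actually measures, stays the same:

```cpp
// Minimal sketch: the absolute gap to the next representable float grows
// with magnitude, but the relative gap (i.e. the precision) does not.
#include <cmath>
#include <cstdio>
#include <limits>

void show(float x) {
    float next = std::nextafter(x, std::numeric_limits<float>::infinity());
    std::printf("x = %-12g absolute gap = %-12g relative gap = %g\n",
                x, next - x, (next - x) / x);
}

int main() {
    show(0.5f);    // tiny absolute gap, relative gap ~1.2e-7
    show(1.0e38f); // enormous absolute gap, relative gap ~1.0e-7
    return 0;
}
```

Both relative gaps come out on the order of 2^-23, regardless of the scale of the value.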
Upvotes: 9