Ahmed A.Hamid

Reputation: 278

Why does normalization address precision problems in comparing doubles?

For the problem of computing the real square root of any given double value, the authors of Elements of Programming Interviews present a solution in which the difference between 2 double values a and b is computed as follows:

#include <limits>
using namespace std;

typedef enum { SMALLER, EQUAL, LARGER } Ordering;

Ordering Compare(double a, double b) {
  // Uses normalization for precision problem.
  double diff = (a - b) / b;
  return diff < -numeric_limits<double>::epsilon()
             ? SMALLER
             : diff > numeric_limits<double>::epsilon() ? LARGER : EQUAL;
}

My questions are:

  1. What is the point of dividing by b? And how does this address "precision problems"?

  2. I do not understand the significance of epsilon(): in essence, the concept of the smallest value greater than 1. Why does it have its own constant in numeric_limits to begin with?

Upvotes: 2

Views: 737

Answers (2)

Ahmed Ebaid

Reputation: 417

The following post might give you a clearer idea of why epsilon comparison is needed for your problem: https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/

Upvotes: -1

Netch

Reputation: 4572

What is the point of dividing by b?

A double consists of a mantissa and an exponent (and a sign, which is positive for sqrt). Taking a square root halves the exponent and takes the root of the mantissa. For example, sqrt(4e20) == 2e10, sqrt(4e-100) == 2e-50, and so on. The absolute rounding error of an IEEE 754 double near those two results is, respectively, on the order of 1e-6 and 1e-66. So no fixed absolute tolerance is useful for comparing successive iterations; only a relative error makes sense. Dividing by b converts an absolute difference into a relative one.

One could use an alternative approach: split the source value into the product of a small value (e.g. between 1 and 4) and a power-of-two exponent; the exponent would be halved and reapplied to the square-root result at the end. With such a "normalized" input value, sequential root approximations can be compared with a fixed absolute tolerance.

I do not understand the significance of epsilon() in essence; the concept of the smallest value greater than 1.

Not "the smallest value greater than 1", but the difference between 1 and that value, i.e. the gap between 1.0 and the next representable double (2^-52 for an IEEE 754 double).

Using epsilon here means an approximation is accepted once it is one of the two representable values closest to the ideal rounded result (one just below it and one just above). That is simpler than detecting the case where precision no longer improves and successive approximations fall into an endless loop.

Upvotes: 2
