Reputation: 106
Given two positive floating point numbers x and y, how would you compute x/y to within a specified tolerance e if the division operator cannot be used?
You cannot use any library functions, such as log and exp; addition and multiplication are acceptable.
How can I solve this? I know the usual approach to division is to use bitwise operators, but with that approach the loop stops once x is less than y, so only the integer part of the quotient is produced.
def divide(x, y):
    # Break x/y down into (x - b*y)/y + b, where b is the integer quotient.
    # b is accumulated as a sum of powers of 2.
    result = 0
    power = 32
    y_power = y << power
    while x >= y:
        while y_power > x:
            y_power >>= 1
            power -= 1
        x -= y_power
        result += 1 << power
    return result
Upvotes: 0
Views: 2198
Reputation: 24464
First, you should separate the signs and exponents from both numbers. After that, divide the pure positive mantissas and adjust the result using the former exponents and signs.
As for dividing the mantissas, it is simple if you remember that division is not only inverted multiplication, but also repeated subtraction: the number of subtractions performed is the result.
A : B -> C, precision e
C = 0
allowance = e * B
multiplicator = 1
delta = B
while (delta >= allowance && A > 0) {
    if (A < delta) {
        multiplicator *= 0.1   // 1/10
        delta *= 0.1           // 1/10
    } else {
        A -= delta
        C += multiplicator
    }
}
In fact, any number > 1 can be used instead of 10; it would be interesting to see which choice is the most efficient. Of course, if we use 2, we can use a shift instead of the multiplication inside the loop.
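For concreteness, here is a minimal runnable Python transcription of the scheme above; the function name subtractive_divide and the example values are illustrative choices, not part of the original pseudocode:

def subtractive_divide(a, b, e):
    """Approximate a/b for positive a, b to within roughly tolerance e,
    using only comparison, subtraction, addition and multiplication."""
    c = 0.0
    allowance = e * b        # stop threshold for the step size
    multiplicator = 1.0      # contribution of one subtraction to the quotient
    delta = b                # current step size; always multiplicator * b
    while delta >= allowance and a > 0:
        if a < delta:
            multiplicator *= 0.1   # shrink the step by the chosen factor
            delta *= 0.1
        else:
            a -= delta
            c += multiplicator
    return c

For example, subtractive_divide(1000.0, 7.0, 1e-6) yields approximately 142.857142 for 1000/7.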
Upvotes: 0
Reputation: 4455
Likely the most straightforward solution is to use Newton's method for division to compute the reciprocal, which may then be multiplied by the numerator to yield the final result.
This is an iterative process that gradually refines an initial guess, doubling the precision on every iteration, and it involves only multiplication and addition.
One complication is generating a suitable initial guess, since an improper selection may fail to converge or take a larger number of iterations to reach the desired precision. For floating-point numbers the easiest solution is to normalize away the power-of-two exponent and use 1 as the initial guess, then invert and reapply the exponent separately for the final result. This yields roughly 2^iteration bits of precision, so 6 iterations should be sufficient for a typical IEEE-754 double with a 53-bit mantissa.
Computing the result to within an absolute error tolerance e is difficult, however, given the limited precision of the intermediate computations. If specified too tightly, the tolerance may not even be representable and, worse, a minimal half-ULP bound requires exact arithmetic. In that case you will be forced to implement the equivalent of an exact IEEE-754 division function by hand, taking great care with rounding and special cases.
Below is one possible implementation in C:
double divide(double numer, double denom, unsigned int precision) {
    int exp;
    denom = frexp(denom, &exp);        /* normalize denom to +/- [0.5, 1) */
    double guess = 1.4142135623731;    /* sqrt(2): geometric mean of the range of 1/denom */
    if (denom < 0)
        guess = -guess;
    while (precision--)
        guess *= 2 - denom * guess;    /* Newton iteration for the reciprocal */
    return ldexp(numer * guess, -exp); /* reapply the exponent */
}
Handling and analysis of special cases such as zeros, other denormals, infinities or NaNs is left as an exercise for the reader.
The frexp and ldexp library functions are easily replaced by manual bit extraction of the exponent and mantissa. However, this is messy and non-portable, and no specific floating-point representation was specified in the question.
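As an illustration of that substitution, here is a small sketch (in Python rather than C, purely for brevity) that mimics frexp via bit manipulation; it assumes the common IEEE-754 binary64 layout, which, as noted, the question does not mandate:

import struct

def frexp_bits(x):
    """Decompose a positive, normal IEEE-754 double into (mantissa, exponent)
    with mantissa in [0.5, 1), mimicking frexp via bit manipulation.
    Zeros, denormals, infinities and NaNs are not handled."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    biased = (bits >> 52) & 0x7FF                            # 11-bit biased exponent
    exponent = biased - 1022                                 # so that x = m * 2**exponent
    mantissa_bits = (bits & ~(0x7FF << 52)) | (1022 << 52)   # force the exponent field to -1
    mantissa = struct.unpack("<d", struct.pack("<Q", mantissa_bits))[0]
    return mantissa, exponent

For example, frexp_bits(1000.0) returns (0.9765625, 10), matching math.frexp.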
Upvotes: 0
Reputation:
An option is to use Newton-Raphson iterations, known to converge quadratically (so that the number of exact bits grows like 1, 2, 4, 8, 16, 32, 64).
First compute the inverse of y with the iterates

z(n+1) = z(n) (2 - z(n) y),

and after convergence form the product

x·z(N) ~ x/y.

But the challenge is to find a good starting approximation z(0), which should be within a factor of 2 of 1/y.
If the context allows it, you can play directly with the exponent of the floating-point representation: writing y = Y·2^e (mantissa Y in [0.5, 1)), take 1·2^-e or √2·2^-e as the starting guess.
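A minimal Python sketch of this exponent trick, using math.frexp and math.ldexp merely as stand-ins for direct manipulation of the exponent bits (the function name and the fixed iteration count are my own choices):

import math

def reciprocal_newton(y, iterations=6):
    """Approximate 1/y for positive y via Newton-Raphson, with the
    starting guess sqrt(2) * 2**-e derived from the binary exponent."""
    m, e = math.frexp(y)                    # y = m * 2**e with 0.5 <= m < 1
    z = math.ldexp(1.4142135623730951, -e)  # sqrt(2) * 2**-e
    for _ in range(iterations):
        z = z * (2.0 - z * y)               # exact bits double on each pass
    return z

For y = 1000 the starting guess is √2·2^-10 = 0.001381067932, exactly the z0 of the worked example below, and the iterations converge to 0.001.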
If this is forbidden, you can set up a table of all the possible powers of 2 in advance and perform a dichotomic search to locate y in the table. The inverse power is then easily found in the table.
For double-precision floats, there are 11 exponent bits, so the table of powers would hold 2047 values, which can be considered a lot. You can trade storage for computation by storing only the exponents 2^0, 2^±1, 2^±2, 2^±4, 2^±8, ... Then during the dichotomic search, you recreate the intermediate exponents on demand by means of products (e.g. 2^5 = 2^4·2^1), and at the same time form the product of the inverses. This can be done efficiently, using only lg(p) multiplies, where p = |lg(y)| is the desired power.
Example: lookup of the power for 1000; the exponents are denoted in binary.

1000 > 2^1b     = 2
1000 > 2^10b    = 4
1000 > 2^100b   = 16
1000 > 2^1000b  = 256
1000 < 2^10000b = 65536

Then

1000 < 2^1100b = 16·256 = 4096
1000 < 2^1010b = 4·256 = 1024
1000 > 2^1001b = 2·256 = 512

so that

2^9 < 1000 < 2^10.
Now the Newton-Raphson iterations yield
z0 = 0.001381067932
z1 = 0.001381067932 x (2 - 1000 x 0.001381067932) = 0.000854787231197
z2 = 0.000978913251777
z3 = 0.000999555349049
z4 = 0.000999999802286
z5 = 0.001
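Putting the pieces together, here is a Python sketch of this answer's full method: the squaring-built table, the dichotomic exponent search accumulating the product of inverses, and the Newton-Raphson refinement. The function name reciprocal and the restriction to y >= 1 are simplifications of mine:

def reciprocal(y, iterations=6):
    """Approximate 1/y for y >= 1 using only comparison, subtraction
    and multiplication."""
    # Stored powers 2**1, 2**2, 2**4, ..., 2**512 and their exact
    # inverses, built by repeated squaring.
    pos, inv = [2.0], [0.5]
    for _ in range(9):
        pos.append(pos[-1] * pos[-1])
        inv.append(inv[-1] * inv[-1])
    # Dichotomic search: binary decomposition of p, where 2^p <= y < 2^(p+1),
    # forming 2^-p as a product of stored inverses along the way.
    power, inv_power = 1.0, 1.0
    for p2, i2 in zip(reversed(pos), reversed(inv)):
        if power * p2 <= y:
            power *= p2
            inv_power *= i2
    # Starting guess sqrt(2) * 2^-(p+1), within a factor of 2 of 1/y.
    z = 1.4142135623730951 * inv_power * 0.5
    for _ in range(iterations):
        z = z * (2.0 - z * y)   # quadratic convergence
    return z

For y = 1000 the search brackets 2^9 < 1000 < 2^10 and the guess is √2·2^-10 = 0.001381067932, reproducing the iterates above; x * reciprocal(y) then approximates x/y.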
Upvotes: 3