Reputation: 75
Here is a function to approximate log10(x+1) for (x+1) < ~1.2:
a = 1.097
b = 0.085
c = 2.31
ans = 1 / (a - b*x + c/x)
It works by adjusting a harmonic-mean-like expression to match log10, but the problem is in the values of a, b and c.
The question is how to get a, b and c just right, and how to make the approximation better.
I made the code below, which gives a pretty good approximation for a, b and c, but it wasn't able to improve them any further.
import numpy as np

a = 1
b = 0.01
c = 2

def mlg(t):
    # the approximation: 1 / (a - b*x + c/x)
    x = t
    if t == 0:
        x = 0.00000001
    o = a - (b * x) + (c / x)
    return 1 / o

def mlg0(t):
    # denominator only: a - b*x + c/x
    x = t
    if t == 0:
        x = 0.00000001
    o = a - (b * x) + (c / x)
    return o

for epoch in range(9000):
    n1 = np.random.uniform(0, 1.19, 1000)
    for i in range(1000):
        n = n1[i]
        o = np.log10(n + 1)   # target value
        u = mlg(n) - o        # residual
        e = u ** 2            # squared error
        de_da = -2 * u / (mlg0(n) ** 2)
        de_db = de_da * n
        de_dc = de_da / n
        a -= de_da * 0.00001
        b -= de_db * 0.00001
        c -= de_dc * 0.00001

print(a, b, c)
How could the code be changed to generate better values?
I've used a method alike back propagation in NN, but it could not give me values any better.
The error is calculated as e = (mlg(x) - log10(x+1))^2.
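For comparison, here is a sketch (not from the original post) that fits a, b and c by linear least squares: since ans = 1/(a - b*x + c/x), the reciprocal 1/ans = a - b*x + c/x is linear in the coefficients, so 1/log10(1+x) can be fitted directly with numpy.linalg.lstsq. Note that this weights errors differently from minimising the direct error, so it is a starting point rather than the optimal fit.

```python
import numpy as np

# Since ans = 1/(a - b*x + c/x), the reciprocal is linear in a, b, c:
#   1/ans = a - b*x + c/x
# so we can fit 1/log10(1+x) with ordinary linear least squares.
x = np.linspace(0.01, 1.19, 500)
y = np.log10(1 + x)

# design matrix for the model a*1 + b*(-x) + c*(1/x)
A = np.column_stack([np.ones_like(x), -x, 1.0 / x])
(a, b, c), *_ = np.linalg.lstsq(A, 1.0 / y, rcond=None)

approx = 1.0 / (a - b * x + c / x)
print(a, b, c, np.max(np.abs(approx - y)))
```

The fitted c comes out close to ln(10), in line with the analytic results in the answers below.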
Upvotes: 2
Views: 146
Reputation: 6310
Here are two approaches.
Method 1: series expansion in x (better for negative and positive x)
Method 2: fit the curve that passes through 3 points (here, x=0, ½ and 1)
Method 1.
If you expand both sides as Taylor series in powers of x, then

x / ( a x - b x^2 + c ) = x/c - (a/c^2) x^2 + ( b/c^2 + a^2/c^3 ) x^3 + ...
log10( 1 + x ) = ( x - x^2/2 + x^3/3 - ... ) / ln 10

Equating coefficients of x, x^2 and x^3 gives

c = ln 10,  a = c/2,  b = c/12
In code:
import math
import numpy as np
import matplotlib.pyplot as plt
c = math.log( 10 )
a = c / 2
b = c / 12
print( "a, b, c = ", a, b, c )
x = np.linspace( -0.2, 0.2, 50 )
y = np.log10( 1 + x )
fit = x / ( a * x - b * x ** 2 + c )
plt.plot( x, y , 'b-', label="Original" )
plt.plot( x, fit, 'ro', label="Fit" )
plt.legend()
plt.show()
Output:
a, b, c = 1.151292546497023 0.19188209108283716 2.302585092994046
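As a quick numeric check (not in the original answer), the series-matched coefficients reproduce log10(1+x) to within about 1e-4 on [-0.2, 0.2]:

```python
import math

# series-matched coefficients from Method 1
c = math.log(10)
a = c / 2
b = c / 12

# maximum absolute error of x/(a*x - b*x^2 + c) against log10(1+x)
max_err = max(
    abs(x / (a * x - b * x**2 + c) - math.log10(1 + x))
    for x in [i / 100 - 0.2 for i in range(41)]  # x = -0.2 ... 0.2
)
print(max_err)
```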
Method 2.
Fit to three points. Here we require

x / ( a x - b x^2 + c ) = log10( 1 + x )

If we require this to fit at x = 0, 1/2 and 1 we get (using L'Hopital's rule for the limit at x=0)

c = ln 10
a/2 - b/4 + c = c / ( 2 ln 1.5 )
a - b + c = c / ln 2

which solves to a = c ( 2/ln 1.5 - 1/ln 2 - 3 ) and b = 2 c ( 1/ln 1.5 - 1/ln 2 - 1 ).
This time I have used your interval x in [0, 1] to plot the fit.
import math
import numpy as np
import matplotlib.pyplot as plt
c = math.log( 10 )
a = c * ( 2/math.log(1.5) - 1/math.log(2) - 3 )
b = 2 * c * ( 1/math.log(1.5) - 1/math.log(2) - 1 )
print( "a, b, c = ", a, b, c )
x = np.linspace( 0.0, 1.0, 50 )
y = np.log10( 1 + x )
fit = x / ( a * x - b * x ** 2 + c )
plt.plot( x, y , 'b-', label="Original" )
plt.plot( x, fit, 'ro', label="Fit" )
plt.legend()
plt.show()
Output:
a, b, c = 1.1280638006656465 0.1087207987723298 2.302585092994046
Upvotes: 3
Reputation: 2649
The Taylor series is almost never the right way to do it unless the series you obtain is very well behaved and rapidly convergent (e.g. exp(x)). log(1+x) is neither: its coefficients alternate in sign and decrease very slowly.
Since log10 is just natural log with a scaling factor I'll treat that.
log10(1+x) = log(1+x)/log(10)
The optimum approximation for most functions is typically a rational polynomial. Starting from the original Taylor series, the Pade approximants that match its first 3 terms for log(1+x) are:

[3,0] x - x^2/2 + x^3/3
[2,1] x(6 + x)/(6 + 4x)
[1,2] 12x/(12 + 6x - x^2) = 1/(1/x + 1/2 - x/12)
Of these options (the OP's form is equivalent to the last one, give or take the scale factor of log(10)), the middle expression is by far the best simple rational Pade approximation for log(1+x). Both of the forms with a non-trivial divisor benefit from extending the equivalent polynomial expansion with more terms, which also extends the region of convergence. It is worth looking at the raw numbers:
| Function  | x = -0.5  | x = 1   |
|-----------|-----------|---------|
| log(1+x)  | -0.693147 | 0.693147 |
| Poly[3,0] | -0.66667  | 0.83333  |
| Pade[2,1] | -0.6875   | 0.7      |
| Pade[1,2] | -0.68571  | 0.70588  |
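The table can be reproduced directly (a quick check using only the standard library):

```python
import math

def poly30(x):  # [3,0]: truncated Taylor series
    return x - x**2 / 2 + x**3 / 3

def pade21(x):  # [2,1] Pade approximant
    return x * (6 + x) / (6 + 4 * x)

def pade12(x):  # [1,2] Pade approximant
    return 12 * x / (12 + 6 * x - x**2)

for x in (-0.5, 1.0):
    print(f"{x:5.1f}  {math.log(1 + x):10.6f}  {poly30(x):10.6f}  "
          f"{pade21(x):10.6f}  {pade12(x):10.6f}")
```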
It is often the case that the rational polynomial with about the same number of terms in numerator and denominator provides the best compromise fit. It can more accurately model both poles and zeroes.
When optimised for an equal absolute ripple error of 0.00024 on the range [0,1], the optimal coefficients (reading them as n0, n1, d0, d1 in x(n0 + n1 x)/(d0 + d1 x)) are slightly different from the analytical result:
5.998550 0.698519 6 3.658462
An equal relative error approximation is also possible on the same range
6 0.780703 6 3.780244
It is also possible to optimise it for a region either side of zero, which obtains the best possible accuracy from a simple formula. In the OP's case, -0.2 to +0.2 gives a coefficient set for [2,1] with minimax ripple 5.8e-5:
5.999871522 1.019979324 6 4.024891596
1.490691459 0.253417837 1.490723379 1
The second set is in canonical form (trailing denominator coefficient 1), which saves one multiplication. There are tables of such approximations dating way back: Hart, Computer Approximations (1968) is one such bible and, although dated, still useful.
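Reading the first coefficient set as n0, n1, d0, d1 in x(n0 + n1 x)/(d0 + d1 x) (my interpretation of the ordering; dividing through by d1 reproduces the canonical set, and the values sit close to the analytic 6, 1, 6, 4), the [-0.2, 0.2] set can be checked like this:

```python
import math

# coefficients quoted above for [-0.2, 0.2], read as x*(n0 + n1*x)/(d0 + d1*x)
# (the ordering is my assumption, consistent with the analytic set 6, 1, 6, 4)
n0, n1, d0, d1 = 5.999871522, 1.019979324, 6.0, 4.024891596

def log1p_fit(x):
    return x * (n0 + n1 * x) / (d0 + d1 * x)

# maximum absolute error against the natural log on [-0.2, 0.2]
max_err = max(
    abs(log1p_fit(x) - math.log(1 + x))
    for x in [i / 100 - 0.2 for i in range(41)]
)
print(max_err)
```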
An even better one is possible by considering the series for log((1+y)/(1-y)) where y = x/(2+x). It all depends how much accuracy you need.
log(1+x) = log((1+y)/(1-y)) ~= 2y(15 - 4y^2 )/(15 - 9y^2)
which matches the Taylor expansion 2y + 2y^3/3 + 2y^5/5 + ... an already better behaved series to begin with.
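A quick numeric check of this substitution (not in the original answer) confirms it is accurate to a few times 1e-5 over the whole of [0, 1]:

```python
import math

def log1p_y(x):
    # substitute y = x/(2+x); then log(1+x) = log((1+y)/(1-y))
    # and we use the approximation 2y(15 - 4y^2)/(15 - 9y^2)
    y = x / (2 + x)
    return 2 * y * (15 - 4 * y**2) / (15 - 9 * y**2)

# maximum absolute error against the natural log on [0, 1]
max_err = max(
    abs(log1p_y(x) - math.log(1 + x))
    for x in [i / 100 for i in range(101)]
)
print(max_err)
```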
Upvotes: 1