Reputation: 39457
I am plotting a famous function and its derivative here. The famous function is the one that arises from Bernoulli's inequality: (1 + t)^n >= 1 + n*t for t >= -1, so f(t) = (1 + t)^n - n*t - 1 is nonnegative on that interval. I wonder if there is some way to calculate the derivative without "hard-coding" it, i.e. by just using some library and calling derivative(f) or something like that.
import numpy as np
import matplotlib.pyplot as plt
# a,b - the two ends of our interval
a = -2.2
b = +0.25
n = 10
# function f(t) = (1+t)^n - n*t - 1
def f(t):
    '''
    s = 1
    for i in range(n):
        s = s * (1 + t)
    return s - n * t - 1
    '''
    return np.power(1 + t, n) - n * t - 1
# derivative f'(t) = n*(1+t)^(n-1) - n
def f1(t):
    '''
    s = 1
    for i in range(n - 1):
        s = s * (1 + t)
    return n * s - n
    '''
    return n * np.power(1 + t, n - 1) - n
t = np.linspace(a, b, 4000)
g = f(t)
g1 = f1(t)
plt.plot(t, g, 'r') # plotting t, g separately
plt.plot(t, g1, 'g') # plotting t, g1 separately
plt.axhline(0, color='k')
plt.axvline(0, color='k')
print("=====================")
print(f(-2))
print(f(-1.5))
print(f(-1))
print(f(-0.5))
print(f(0))
print("=====================")
print(f1(-2))
print(f1(-1.5))
print(f1(-1))
print(f1(-0.5))
print(f1(0))
plt.grid()
plt.show()
Upvotes: 1
Views: 3585
Reputation: 80299
You can use sympy to calculate the derivative symbolically. If you have a nice mathematical expression, this gives better accuracy than numerical methods.
Sympy has its own plot functions, but they can be cumbersome when you want to combine many elements. In those cases, it can be easier to use lambdify to convert them to numpy functions.
from sympy import Pow, lambdify
from sympy.abc import t, n
f = Pow(1 + t, n) - n * t - 1
f1 = f.diff(t) # result: -n + n*(t + 1)**n/(t + 1)
f_np = lambdify(t, f.subs(n, 10))
f1_np = lambdify(t, f1.subs(n, 10))
import numpy as np
from matplotlib import pyplot as plt
a = -2.2
b = +0.25
x = np.linspace(a, b, 1000)
plt.plot(x, f_np(x), 'r')
plt.plot(x, f1_np(x), 'g')
plt.axhline(0, color='k')
plt.axvline(0, color='k')
plt.show()
PS: Purely staying within sympy, plotting can happen as follows:
from sympy import Pow, plot
from sympy.abc import t, n
a = -2.2
b = +0.25
f = Pow(1 + t, n) - n * t - 1
f1 = f.diff(t)
p1 = plot(f.subs(n, 10), (t, a, b), line_color='r', show=False)
p2 = plot(f1.subs(n, 10), (t, a, b), line_color='g', show=False)
p1.append(p2[0])  # add the second plot's data series to the first plot
p1.show()
Upvotes: 1
Reputation: 937
Automatic differentiation is a great tool for this. Check out https://github.com/HIPS/autograd.
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import elementwise_grad as egrad
# a,b - the two ends of our interval
a = -2.2
b = +0.25
n = 10
# function f(t) = (1+t)^n - n*t - 1
def f(t):
    '''
    s = 1
    for i in range(n):
        s = s * (1 + t)
    return s - n * t - 1
    '''
    return np.power(1 + t, n) - n * t - 1
# derivative f'(t) = n*(1+t)^(n-1) - n
f1 = egrad(f)
t = np.linspace(a, b, 4000)
g = f(t)
g1 = f1(t)
plt.plot(t, g, 'r') # plotting t, g separately
plt.plot(t, g1, 'g') # plotting t, g1 separately
plt.axhline(0, color='k')
plt.axvline(0, color='k')
print("=====================")
print(f(-2))
print(f(-1.5))
print(f(-1))
print(f(-0.5))
print(f(0))
print("=====================")
print(f1(-2.0))
print(f1(-1.5))
print(f1(-1.0))
print(f1(-0.5))
print(f1(0.0))
Note that I had to change the arguments passed to f1 to floats, but otherwise it generates the same plot. This is in general how all of the "deep learning" frameworks such as TensorFlow, PyTorch, etc. compute gradients.
This avoids having to analytically compute the derivative yourself and also avoids issues with numerical differentiation.
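To illustrate the same idea in another framework, here is a minimal sketch with JAX (my addition, not part of the original answer): jax.grad differentiates a scalar function, and jax.vmap maps that derivative over an array.
import jax
import jax.numpy as jnp

n = 10

def f(t):
    return jnp.power(1 + t, n) - n * t - 1

# grad gives the derivative of a scalar function; vmap applies it elementwise
f1 = jax.vmap(jax.grad(f))

t = jnp.linspace(-2.2, 0.25, 4000)
g1 = f1(t)  # should match the autograd result above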
Upvotes: 1
Reputation: 39052
Among many others, there are the following two methods:
Method 1
You can use derivative from scipy, which takes a function f and returns its derivative with respect to t, so you don't have to define the derivative function f1(t) explicitly. (Note that scipy.misc.derivative was deprecated in SciPy 1.10 and removed in 1.12, so this requires an older SciPy version.)
from scipy.misc import derivative

def f(t):
    return np.power(1 + t, n) - n * t - 1

# Rest of the code
t = np.linspace(a, b, 4000)
g = f(t)
plt.plot(t, g, 'r')  # plotting t, g separately
plt.plot(t, derivative(f, t, dx=0.001), 'g')  # central difference with step dx
Method 2
You can use NumPy's gradient function, which uses central differences and returns an array with the same shape as the input.
t, dt = np.linspace(a, b, 4000, retstep=True)
g1 = np.gradient(f(t), dt)
plt.plot(t, g1, 'g')
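As a quick sanity check (my addition, not part of the original answer), the central-difference result can be compared against the hand-derived derivative from the question:
f1_exact = n * np.power(1 + t, n - 1) - n  # analytic f'(t)
print(np.max(np.abs(g1 - f1_exact)))  # maximum absolute error over the grid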
Upvotes: 2