I have a Python function which has 2 inputs X0 and X1 and 1 output Y:
def mySimulator(x):
    y0 = 1.0 - x[0] * x[1]
    y = [y0]
    return y
I would like to estimate the variance of the output Y with a Taylor expansion when, for example, X0 ~ Normal(1, 3) and X1 ~ Normal(2, 4) (mean, standard deviation).
This is easy with OpenTURNS:
import openturns as ot
g = ot.PythonFunction(2,1,mySimulator)
distX0 = ot.Normal(1.,3.)
distX1 = ot.Normal(2.,4.)
X = ot.ComposedDistribution([distX0,distX1])
XRV = ot.RandomVector(X)
Y = ot.CompositeRandomVector(g, XRV)
taylor = ot.TaylorExpansionMoments(Y)
sigma2 = taylor.getCovariance()
print(sigma2)
The previous script prints:
>>> print(sigma2)
[[ 52 ]]
The covariance from the Taylor expansion is based on the gradient of the function and the variances of the marginals. The issue is that the gradient of the function is computed here with finite differences. This is a pity, since the exact gradient is straightforward to compute:
def myGradient(x):
    dy0dx0 = -x[1]
    dy0dx1 = -x[0]
    gradient = [[dy0dx0], [dy0dx1]]
    return gradient
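As a sanity check (plain Python, no OpenTURNS needed), the first-order Taylor variance can be reproduced by hand from these exact derivatives and the marginal standard deviations; it matches the [[ 52 ]] printed above:

```python
# Delta-method (first-order Taylor) variance check for y = 1 - x0*x1,
# with independent X0 ~ Normal(1, 3) and X1 ~ Normal(2, 4) (mean, sigma).
mu0, mu1 = 1.0, 2.0
sigma0, sigma1 = 3.0, 4.0

# Exact partial derivatives evaluated at the mean point (1, 2).
dy_dx0 = -mu1  # d(y0)/d(x0) = -x1 -> -2
dy_dx1 = -mu0  # d(y0)/d(x1) = -x0 -> -1

# Var(Y) ~ (dy/dx0)^2 * Var(X0) + (dy/dx1)^2 * Var(X1)
var_taylor = dy_dx0**2 * sigma0**2 + dy_dx1**2 * sigma1**2
print(var_taylor)  # 52.0
```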
However, I do not know how to define a PythonFunction with a user-defined gradient: is that possible? I searched the documentation and found some related pages, but they did not lead me to the solution.
Actually, I found the answer by myself. The PythonFunction constructor has a gradient option which can be used to set the gradient of the function. I first defined the gradient with the following Python function, which returns an OpenTURNS Matrix:
def myGradient(x):
    dy0dx0 = -x[1]
    dy0dx1 = -x[0]
    gradient = ot.Matrix([[dy0dx0], [dy0dx1]])
    return gradient
Then the PythonFunction can be defined and used as follows: only the definition of the function changes, not the rest of the script.
g = ot.PythonFunction(2, 1, mySimulator, gradient=myGradient)
XRV = ot.RandomVector(X)
Y = ot.CompositeRandomVector(g, XRV)
taylor = ot.TaylorExpansionMoments(Y)
sigma2 = taylor.getCovariance()
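For completeness, here is a plain-Python sketch, independent of OpenTURNS, comparing the exact gradient with a centered finite-difference approximation of the kind that is used when no gradient is supplied. The helper finiteDifferenceGradient and the step size h are illustrative choices, not OpenTURNS API:

```python
def mySimulator(x):
    y0 = 1.0 - x[0] * x[1]
    return [y0]

def myGradient(x):
    # Exact partial derivatives of y0 = 1 - x0*x1.
    return [[-x[1]], [-x[0]]]

def finiteDifferenceGradient(f, x, h=1.0e-6):
    # Centered finite differences: one row per input, one column per output.
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append([(f(xp)[0] - f(xm)[0]) / (2.0 * h)])
    return grad

x = [1.0, 2.0]
print(myGradient(x))  # [[-2.0], [-1.0]]
print(finiteDifferenceGradient(mySimulator, x))  # close to the exact gradient
```

The finite-difference result only approximates the exact values, which is why passing gradient=myGradient to the PythonFunction is preferable when the derivatives are known analytically.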