Tin Tran

Reputation: 11

Pydrake: Evaluate PiecewisePolynomial with Symbolic Expression

Is there a way to call trajectory.value(t) with t being a symbolic value and trajectory being an instance of the pydrake.trajectories.PiecewisePolynomial class? My use case is using this trajectory explicitly inside a DirectTranscription cost. The cost I want to write is of the form

J = Σ (x[k] - xd[k])^T Q (x[k] - xd[k])

s.t. x[k+1] = f(x[k], u[k])

I have tried doing this with a simple numpy function for xd[k]. The skeleton would look like:

xd = lambda t: np.sin(t)  # reference trajectory to track
dirtran = DirectTranscription(sys, sys.CreateDefaultContext(), horizon + 1)
x_k = dirtran.state()  # placeholder state variables
t_k = dirtran.time()   # placeholder time variable

x_k_decision = x_k - xd(t_k[0])
dirtran.AddRunningCost(x_k_decision.transpose() @ Q @ x_k_decision)

And everything works well. The solver gives me a correct input trajectory that tracks xd = sin(t). Now I want to use the same approach to track a trajectory coming out of a trajectory optimization such as direct collocation; to be exact, I want to track the trajectory returned by ReconstructStateTrajectory, which is a PiecewisePolynomial. The skeleton code would look like:

xd = dircol.ReconstructStateTrajectory(result)
dirtran = DirectTranscription(sys, sys.CreateDefaultContext(), horizon + 1)
x_k = dirtran.state()
t_k = dirtran.time()

x_k_decision = x_k - xd.value(t_k[0])
dirtran.AddRunningCost(x_k_decision.transpose() @ Q @ x_k_decision)

Well, this does not work, because t_k is actually (Variable 't(0)', Continuous), which is not the correct data type for .value(). I looked into the C++ source code and found out that the actual value of t(0) is only substituted inside DoAddRunningCost via SubstitutePlaceholderVariables; only at that point could .value() be evaluated properly, but that is too late, because the program already throws the error The following environment does not have an entry for the variable t(0) on the line x_k_decision = x_k - xd.value(t_k[0]). One obvious workaround would be to write the cost manually in a for loop and increment the time step myself (so I can call .value() with a float rather than a Variable placeholder; see the sketch below), but that would kind of destroy the elegance of DirectTranscription. So, is there any way I can use a PiecewisePolynomial in AddRunningCost?
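Concretely, the manual workaround I have in mind would be something like the sketch below, where dt is the (assumed) fixed time step of sys; depending on the Drake version, the per-sample cost may have to be added via dirtran.prog().AddCost instead:

dt = 0.01  # assumed fixed time step of sys
for k in range(horizon + 1):
    x_k = dirtran.state(k)              # decision variables at sample k
    err = x_k - xd.value(k * dt)[:, 0]  # .value() gets a float here; it returns a column, so slice it
    dirtran.AddCost(err.transpose() @ Q @ err)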

I am aware that we could change coordinates so that we solve for (x[k] - xd[k]) instead of x[k], but then you normally have to take extra care when writing the MathematicalProgram. Being able to write something like the first skeleton (with xd = lambda t: np.sin(t)) is really expressive (and it works); it's just that I can't find a way to make it work when the reference is a solution coming out of DirectCollocation.

Upvotes: 0

Views: 99

Answers (1)

Russ Tedrake

Reputation: 5533

This is a known limitation of our PiecewisePolynomial class.

While I agree it might be elegant to fix this, I think it's almost never what you would want? If we were to make symbolic::Expression work for this, the Expression it returned would be some nasty list of conditionals that check the symbolic variable t against the segment times, and only inside each branch would you get a nice polynomial expression in t.

If you skip over the logical portion (deciding which segment you are in), then the rest of the evaluation can be done with Expression. For example

import numpy as np
from pydrake.all import PiecewisePolynomial_, Variable, Expression

t = Variable('t')
pp = PiecewisePolynomial_[Expression].FirstOrderHold([0, 1, 2],
                                                np.random.rand(2, 3))
print(pp.getPolynomial(segment_index=0, row=0, col=0).EvaluateUnivariate(t))

returns an expression of the form

(0.99826839350064878 - 0.92294844889865379 * t)

You'll need this PR to use it.
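Continuing that example, if you need the whole matrix value of one segment (e.g., to form a state-error vector), you can loop the same calls over the rows and columns; a rough sketch:

# Rough sketch: assemble segment 0 of `pp` as a matrix of expressions in t,
# one entry at a time.
x_sym = np.array(
    [[pp.getPolynomial(segment_index=0, row=i, col=j).EvaluateUnivariate(t)
      for j in range(pp.cols())]
     for i in range(pp.rows())])
print(x_sym)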

For the specific cost you're trying to add, I would think that using AutoDiff would be much more appropriate. The cast wasn't quite as simple as I would have liked, but it can be done:

import numpy as np
from pydrake.all import PiecewisePolynomial_, AutoDiffXd, Polynomial_

pp = PiecewisePolynomial_[float].FirstOrderHold([0, 1, 2],
                                                np.random.rand(2, 3))

def convert_polymat_to_autodiff(mat):
    # Rebuild every Polynomial[float] entry as a Polynomial[AutoDiffXd] by
    # copying its coefficients (stored in a plain object array).
    mat_ad = np.empty(mat.shape, dtype=object)
    for i in range(mat.shape[0]):
        for j in range(mat.shape[1]):
            mat_ad[i, j] = Polynomial_[AutoDiffXd](
                mat[i, j].GetCoefficients())
    return mat_ad

polys = [
    convert_polymat_to_autodiff(pp.getPolynomialMatrix(i))
    for i in range(pp.get_number_of_segments())
]
pp_ad = PiecewisePolynomial_[AutoDiffXd](polys, pp.get_segment_times())

print(pp_ad.value(0.4))

Once you have that, then you could implement your cost as a generic function. Something like:

def cost_fun(vars):
    # vars stacks [t, x]; Q is the tracking weight from the question.
    t = vars[0]
    x = vars[1:]
    if x.dtype == float:
        # .value() returns a column matrix, so flatten it before subtracting.
        x_decision = x - pp.value(t).flatten()
    else:
        # AutoDiffXd evaluation (used by the solver to get gradients).
        x_decision = x - pp_ad.value(t).flatten()
    return x_decision.transpose() @ Q @ x_decision
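As a rough sketch of how such a generic cost gets attached (prog, Q, and the sizes below are placeholders for illustration):

from pydrake.all import MathematicalProgram, Solve

Q = np.eye(2)  # placeholder tracking weight used by cost_fun
prog = MathematicalProgram()
t_var = prog.NewContinuousVariables(1, "t")
x_var = prog.NewContinuousVariables(2, "x")
prog.AddCost(cost_fun, vars=np.concatenate((t_var, x_var)))
result = Solve(prog)
print(result.GetSolution(x_var))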

Check out the nonlinear_program Drake tutorial if you haven't.

Upvotes: 0
