EpsilonGreedy

Reputation: 83

Drake flattens by channel

Why does Drake flatten along the channel dimension instead of along the batch dimension?

For an [n, c] matrix, numpy/torch/tensorflow provide a flatten operation and its inverse, a reshape operation.

This flatten operation flattens along the batch dimension "n", whereas Drake flattens along "c".

Is there some built-in function to get the flattening along the batch dimension?

For example, I want to access:

prog.initial_guess()

But I want it flattened along the batch dimension, consistent with the original matrix-shaped decision variables.
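To make the two orderings concrete, here is a small numpy sketch (the values are hypothetical stand-ins for decision variables); `order="C"` is the row-major flattening the question wants, and `order="F"` is the column-major flattening described in the answer below:

```python
import numpy as np

# A 2x3 matrix standing in for matrix-shaped decision variables.
mat = np.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

# numpy's default flatten is row-major (C order): walk each row in turn.
row_major = mat.flatten(order="C")   # [0, 1, 2, 3, 4, 5]

# Column-major (Fortran order) walks each column in turn instead.
col_major = mat.flatten(order="F")   # [0, 3, 1, 4, 2, 5]
```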

Upvotes: 1

Views: 42

Answers (1)

Hongkai Dai

Reputation: 2766

Drake is written in C++ with Python bindings. In C++ we use Eigen as our linear algebra library. Eigen::Matrix defaults to column-major storage, namely it stores data in the order mat[0, 0], mat[1, 0], mat[2, 0], ..., mat[0, 1], mat[1, 1], .... This is different from how numpy/pytorch store matrices, which default to row-major.

Specifically for your question about getting the initial guess of a matrix variable, you can call the function

initial_guess = prog.GetInitialGuess(variable_matrix)

It will return a numpy matrix of floats, where initial_guess[i, j] is the initial guess value of the variable variable_matrix[i, j], so you don't need to worry about matrix flattening/reshaping.
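If you do need the flat vector itself in batch-major (row-major) order, a numpy sketch of the reordering, assuming the flat vector is column-major as described above (the values here are hypothetical):

```python
import numpy as np

# Suppose this is a column-major flat vector of a 2x3 matrix of
# initial guesses, as Eigen-backed storage would produce.
flat = np.array([0.0, 3.0, 1.0, 4.0, 2.0, 5.0])

# Reshape in Fortran (column-major) order to recover the matrix...
mat = flat.reshape((2, 3), order="F")   # [[0, 1, 2], [3, 4, 5]]

# ...then flatten in C (row-major) order for the batch-major layout.
batch_major = mat.flatten(order="C")    # [0, 1, 2, 3, 4, 5]
```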

Upvotes: 3
