Reputation: 147
I'd like to numerically solve an equation involving a MatrixSymbol. Here's a basic example:
import sympy as sy
v = sy.MatrixSymbol('v', 2, 1)
equation = (v - sy.Matrix([17, 23])).as_explicit()
I'd like something like:
sy.nsolve(equation, v, sy.Matrix([0,0]))
But because nsolve does not accept MatrixSymbols, I've made a kludgy workaround that gives the correct output of Matrix([[17.0], [23.0]]):
vx, vy = sy.symbols('v_x v_y')
sy.nsolve(equation.subs(v, sy.Matrix([vx, vy])), [vx, vy], [0,0])
Essentially, I've converted a MatrixSymbol to a matrix of Symbols to make nsolve happy.
Is there a better way I should be doing this?
Edit: the workaround can be simplified to:
vseq = sy.symbols('a b')  # names must be distinct
sy.nsolve(equation.subs(v, sy.Matrix(vseq)), vseq, [0,0])
But there ought to be a cleaner way to convert a MatrixSymbol to a sequence of Symbols, or a way to avoid needing to do so in the first place.
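For completeness, here is one way to derive that Symbol sequence from the MatrixSymbol itself, using the range syntax of sy.symbols (just a sketch; it still feels like something that shouldn't be necessary):
import sympy as sy

v = sy.MatrixSymbol('v', 2, 1)
equation = (v - sy.Matrix([17, 23])).as_explicit()

# one plain Symbol per entry, named after the MatrixSymbol: 'v_0:2' -> (v_0, v_1)
vseq = sy.symbols(f'{v.name}_0:{v.shape[0]}')
sy.nsolve(equation.subs(v, sy.Matrix(vseq)), vseq, [0, 0])  # Matrix([[17.0], [23.0]])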
Upvotes: 1
Views: 331
A cleaner way is to create a Matrix from symarray:
v = sy.Matrix(sy.symarray("v", (2,)))
equation = v - sy.Matrix([17, 23])
sy.nsolve(equation, v, [0, 0])
Here, symarray creates a (NumPy) array of symbols [v_0, v_1], which is then turned into a Matrix. One could also use sy.symarray("v", (2, 1)) to get a 2-D array, but since SymPy's Matrix constructor accepts 1-D inputs, this is not necessary.
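For illustration, a quick sketch of both variants (the element names v_0, v_1 and v_0_0, v_1_0 are simply symarray's defaults):
import sympy as sy

# 1-D symarray: a NumPy object array of Symbols [v_0, v_1];
# Matrix turns it into a 2x1 column
v1 = sy.Matrix(sy.symarray("v", (2,)))

# 2-D symarray: already shaped (2, 1), with Symbols v_0_0, v_1_0;
# Matrix accepts it just the same
v2 = sy.Matrix(sy.symarray("v", (2, 1)))

print(v1.shape, v2.shape)  # (2, 1) (2, 1)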
Upvotes: 1