Reputation: 8439
I just accidentally discovered that I can mix SymPy expressions with NumPy arrays:
>>> import numpy as np
>>> import sympy as sym
>>> x, y, z = sym.symbols('x y z')
>>> np.ones(5)*x
array([1.0*x, 1.0*x, 1.0*x, 1.0*x, 1.0*x], dtype=object)
# I was expecting this to throw an error!
# sum works and collects terms etc. as I would expect
>>> np.sum(np.array([x+0.1,y,z+y]))
x + 2*y + z + 0.1
# dot works too
>>> np.dot(np.array([x,y,z]),np.array([z,y,x]))
2*x*z + y**2
>>> np.dot(np.array([x,y,z]),np.array([1,2,3]))
x + 2*y + 3*z
This is quite useful for me, because I'm doing both numerical and symbolic calculations in the same program. However, I'm curious about the pitfalls and limitations of this approach; it seems, for example, that neither np.sin nor sym.sin is supported on NumPy arrays containing SymPy objects, since both raise an error.
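For the record, applying sym.sin elementwise does work if you lift it over the array yourself, for example with np.vectorize (a sketch; otypes=[object] keeps the result as an object array rather than letting NumPy guess a dtype):

```python
import numpy as np
import sympy as sym

x, y = sym.symbols('x y')
arr = np.array([x, y, x + y], dtype=object)

# np.sin(arr) and sym.sin(arr) both fail, but applying sym.sin
# to each element individually works:
sym_sin = np.vectorize(sym.sin, otypes=[object])
print(sym_sin(arr))   # [sin(x) sin(y) sin(x + y)]
```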
However, this NumPy-SymPy integration doesn't appear to be documented anywhere. Is it just an accident of how these libraries are implemented, or is it a deliberate feature? If the latter, when is it designed to be used, and when would it be better to use sympy.Matrix or other solutions? Can I expect to keep some of NumPy's speed when working with arrays of this kind, or will it just drop back to Python loops as soon as a SymPy symbol is involved?
In short I'm pleased to find this feature exists, but I would like to know more about it!
Upvotes: 2
Views: 252
Reputation: 231510
I just came across a relevant note in the NumPy 1.15 release notes (https://docs.scipy.org/doc/numpy-1.15.1/release.html):
Comparison ufuncs accept dtype=object, overriding the default bool
This allows object arrays of symbolic types, which override == and other operators to return expressions, to be compared elementwise with np.equal(a, b, dtype=object).
I think that means this works, but didn't before:
In [9]: np.array([x+.1, 2*y])==np.array([.1+x, y*2])
Out[9]: array([ True, True])
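A quick sketch of the np.equal form mentioned in the note. With dtype=object the ufunc stores whatever each element's __eq__ returns instead of coercing to bool; SymPy's == returns plain Python bools for structural equality, so here the object array just holds True/False:

```python
import numpy as np
import sympy as sym

x, y = sym.symbols('x y')
a = np.array([x + 0.1, 2*y])
b = np.array([0.1 + x, y*2])

# dtype=object keeps the raw result of each elementwise __eq__:
result = np.equal(a, b, dtype=object)
print(result)   # [True True]
```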
Upvotes: 0
Reputation:
This is just NumPy's support for arrays of objects. It is not specific to SymPy. NumPy examines the operands and finds that not all of them are its native scalars; there are some objects involved. So it calls that object's __mul__ or __rmul__ and puts the result into an array of objects. For example, mpmath objects:
>>> import mpmath as mp
>>> np.ones(5) * mp.mpf('1.23')
array([mpf('1.23'), mpf('1.23'), mpf('1.23'), mpf('1.23'), mpf('1.23')],
dtype=object)
or lists (recent NumPy versions require an explicit dtype=object for ragged input like this):
>>> np.array([[2], 3], dtype=object)*5
array([list([2, 2, 2, 2, 2]), 15], dtype=object)
>>> np.array([2, 3])*np.array([[1, 1], [2]], dtype=object)
array([list([1, 1, 1, 1]), list([2, 2, 2])], dtype=object)
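The dispatch is easy to see with a toy class (a hypothetical Tag class, here only to show which method NumPy ends up calling for each element):

```python
import numpy as np

class Tag:
    """Minimal object demonstrating NumPy's elementwise dispatch."""
    def __init__(self, name):
        self.name = name
    def __mul__(self, other):
        return Tag(f"{self.name}*{other}")
    def __rmul__(self, other):
        return Tag(f"{other}*{self.name}")
    def __repr__(self):
        return f"Tag({self.name!r})"

# float.__mul__(Tag) returns NotImplemented, so Python falls back
# to Tag.__rmul__ for each element; the results land in an object array:
arr = np.ones(3) * Tag("t")
print(arr)   # each element is Tag('1.0*t'), dtype=object
```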
Can I expect to keep some of numpy's speed when working with arrays of this kind,
No. NumPy object arrays have no performance benefit over Python lists; element access probably has more overhead than it would with a list. See: Storing Python objects in a Python list vs. a fixed-length Numpy array
There is no reason to use such arrays if a more specific data structure is available.
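If speed matters, the usual pattern is to do the symbolic work in SymPy and then compile the finished expression to a vectorized numeric function with sym.lambdify (a sketch):

```python
import numpy as np
import sympy as sym

x = sym.symbols('x')
expr = sym.sin(x) + x**2

# lambdify compiles the expression into a function that runs on
# fast float64 NumPy ufuncs instead of object arrays:
f = sym.lambdify(x, expr, modules='numpy')
values = f(np.linspace(0.0, 1.0, 5))
print(values.dtype)   # float64
```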
Upvotes: 2