Reputation: 23
I was trying to measure the time taken to evaluate a mathematical function, both the NumPy way and the basic Pythonic way. First, I measured how much time it takes to evaluate the function.
import numpy as np
import scipy as sp
import scipy.integrate as si
def func1(x):
    return np.piecewise(x, [x <= 2, x > 2],
                        [lambda x: 2*x + pow(2, x),
                         lambda x: -(pow(x, 2) + 2)])

def func2(x):
    if x <= 2:
        return 2*x + pow(2, x)
    else:
        return -(pow(x, 2) + 2)
data = np.linspace(-10, 10, 1000)
%timeit data1 = func1(data)
data2 = []
%timeit for i in range(0, np.size(data)): data2.append(func2(data[i]))
The result above was as I expected: the NumPy way was much faster than the basic way.
35.2 µs ± 110 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
771 µs ± 10.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
However, something strange happens with scipy.integrate.quad: the integration was much faster the basic Pythonic way. Why is this the case?
%timeit si.quad(func1, -10, 10)
%timeit si.quad(func2, -10, 10)
5.59 ms ± 25.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
187 µs ± 39.3 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Upvotes: 2
Views: 747
Reputation: 53089
np.piecewise is only fast when it can leverage vectorization. When the inputs are scalars, it is much slower than the plain Python equivalent, presumably due to per-call overhead:
from timeit import timeit
kwds = dict(globals=globals(), number=1000)
print(timeit("data1 = func1(data)", **kwds))
data2 = []
print(timeit("for i in range(0, np.size(data)): data2.append(func2(data[i]))", **kwds))
data3 = []
print(timeit("for i in range(0, np.size(data)): data3.append(func1(data[i]))", **kwds))
0.06953751016408205
0.957529284991324
15.591511018108577
Now, the mere fact that it works with func2, the docs, and the way popular adaptive integration schemes work all suggest that quad calls its integrand with scalar arguments, one after another, which would explain your observation.
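You can verify this directly with a small sketch: wrap a probe function (a hypothetical integrand, not from the original post) that records every argument quad passes in, then check that each recorded argument is a scalar:

```python
import numpy as np
import scipy.integrate as si

calls = []  # record every argument quad passes to the integrand

def probe(x):
    calls.append(x)   # log the argument as received
    return x**2       # simple smooth integrand

si.quad(probe, 0.0, 1.0)

# quad evaluated the integrand many times, each time with a scalar,
# so func1 pays np.piecewise's scalar overhead on every call
print(len(calls), all(np.ndim(x) == 0 for x in calls))
```

Since every call is scalar, func1 hits the slow scalar path of np.piecewise on each of quad's many evaluations, while func2 runs a cheap Python branch.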
Upvotes: 1