user1492332

Reputation: 41

speeding up scipy.integrate.quad()?

I'm trying to speed up the following code, which computes a sum of integrals. To get good accuracy I need to increase L_max, but that also makes the execution time much longer. The specific case below captures 0.999999 of the probability curve and takes about 65 seconds. I have heard of Cython and its ability to speed up code, but I don't know how to use it or how it could help in this case. Any ideas?

import math
from scipy import integrate
from decimal import Decimal, getcontext
import time

start_time=time.time()
getcontext().prec=100
################################
def pt(t):
    first_term=math.exp(-lam*t)*((0.0001*I)**ni)*(t**(ni-1))*math.exp(-(0.0001*I)*t)/(math.factorial(ni-1))
    sum_term=0.0
    i=0
    while i<ni:
        sum_term=sum_term+((0.0001*I)**i)*(t**(i))*math.exp(-(0.0001*I)*t)/(math.factorial(i))
        i=i+1
    sum_term=lam*math.exp(-lam*t)*sum_term
    total=first_term+sum_term
    return total
#################################
def pLgt(t):
    return Decimal(((Decimal((0.0001*O)*t))**Decimal(L))*Decimal(math.exp(-(0.0001*O)*t)))/Decimal((math.factorial(L)))
######################################
def pL_t(t):
    return (pLgt(t))*Decimal(pt(t))
################################
lam=0.0001
beta=0.0001
ni=10
I=5969
O=48170
L_max=300.0
L=0.0
sum_term=0.0
sum_probability=0.0
while L<L_max:
    probability=integrate.quad(pL_t,0,800)[0]
    sum_probability=sum_probability+probability
    sum_term=sum_term+L*probability
    L=L+1.0
print(time.time()-start_time)
print(sum_probability)
print(sum_term)
print((sum_term-1)*0.46+6.5)

Upvotes: 1

Views: 2663

Answers (2)

user1149913

Reputation: 4523

The pL_t function looks like a sum of gamma distributions, in which case you should be able to evaluate the integral as a sum of partial incomplete gamma functions: http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.gdtr.html#scipy.special.gdtr
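A sketch of that idea, using the question's ni and I values for a single gamma term (the names a, b, and gamma_pdf here are my own, not from the question): scipy.special.gdtr(a, b, x) is the gamma CDF for rate a and shape b, i.e. exactly the integral that quad is computing numerically.

```python
import math
from scipy import integrate, special

a = 0.0001 * 5969   # rate, from the question's 0.0001*I
b = 10              # shape, from the question's ni

def gamma_pdf(t):
    # Density of a gamma distribution with rate a and integer shape b
    return a**b * t**(b - 1) * math.exp(-a * t) / math.factorial(b - 1)

quad_val = integrate.quad(gamma_pdf, 0, 800)[0]
closed_form = special.gdtr(a, b, 800)  # same integral, no quadrature needed
print(quad_val, closed_form)
```

Summing such gdtr terms, one per gamma term of pL_t, would replace every call to quad with a handful of special-function evaluations.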

Upvotes: 0

Alex Szatmary

Reputation: 3571

Doing calculations in Decimal is probably slowing you down a lot without providing any benefit. Decimal arithmetic is roughly 100x slower than float, as Kozyarchuk noted on Stack Overflow here. Using Decimal types in NumPy arrays also keeps you from getting the speed benefits of NumPy.

Meanwhile, it's unclear to me that the results from scipy.integrate.quad would actually be of the level of precision you want; if you really do need arbitrary precision, you may have to write your quadrature code from scratch.
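For what it's worth, mpmath (commonly installed alongside SymPy) already provides arbitrary-precision quadrature via mpmath.quad, so writing one from scratch may not be necessary. A minimal sketch with a simple integrand rather than the question's pL_t:

```python
from mpmath import mp, quad, exp

mp.dps = 50  # work with 50 significant digits
# Integrate exp(-t) over [0, inf); the exact answer is 1
val = quad(lambda t: exp(-t), [0, mp.inf])
print(val)
```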

If you do need to use Decimal numbers, at least caching the Decimal representations of those numbers will provide you some speed benefit. That is, using

O=Decimal(48170)
L=Decimal(0.0)

and telling pLgt to just use O and L will be faster.
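A minimal sketch of that caching, with a hypothetical fixed L = 3 (in the question L changes each loop pass, so the cached values would be rebuilt once per outer iteration rather than once per quad evaluation):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 100

# Convert the constants to Decimal once, outside the integrand
rate = Decimal("0.0001") * Decimal(48170)   # 0.0001 * O from the question
L = 3                                       # hypothetical fixed value of L
fact_L = Decimal(math.factorial(L))

def pLgt(t):
    x = rate * Decimal(t)
    # Decimal.exp() keeps the whole computation at the context precision
    return x**L * (-x).exp() / fact_L

print(pLgt(1))
```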

Upvotes: 2
