Reputation: 7338
I am trying to write a fast algorithm to compute the log gamma function. Currently my implementation is naive: it just iterates 10 million times to compute the value (I am also using numba to optimise the code).
import numpy as np
from numba import njit

EULER_MAS = 0.577215664901532  # euler mascheroni constant
HARMONC_10MIL = 16.695311365860007  # sum of 1/k from 1 to 10,000,000

@njit(fastmath=True)
def gammaln(z):
    """Compute log of gamma function for some real positive float z"""
    out = -EULER_MAS*z - np.log(z) + z*HARMONC_10MIL
    n = 10000000  # number of iters
    for k in range(1, n+1, 4):
        # loop unrolling
        v1 = np.log(1 + z/k)
        v2 = np.log(1 + z/(k+1))
        v3 = np.log(1 + z/(k+2))
        v4 = np.log(1 + z/(k+3))
        out -= v1 + v2 + v3 + v4
    return out
I timed my code against the scipy.special.gammaln implementation and mine is literally hundreds of thousands of times slower. So I am doing something very wrong or very naive (probably both). At least my answers are correct to within 4 decimal places at worst when compared to scipy.
I tried to read the _ufunc code implementing scipy's gammaln function, but I don't understand the Cython code that the _gammaln function is written in.
Is there a faster and more optimised way I can calculate the log gamma function? How can I understand scipy's implementation so I can incorporate it with mine?
Upvotes: 1
Views: 2636
Reputation: 6482
Regarding your previous questions, I guess an example of wrapping the scipy.special functions for Numba is also useful.
Example
Wrapping Cython cdef functions is quite easy and portable as long as only simple datatypes are involved (int, double, double*, ...). For documentation on how to call the scipy.special functions, have a look at this. The names you actually need to wrap a function are in scipy.special.cython_special.__pyx_capi__. Functions which can be called with different datatypes have mangled names, but determining the right one is quite easy (just look at the datatypes).
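For example, a small sketch that just iterates over __pyx_capi__ and filters for a substring shows you the exact (possibly mangled) name to look up:

    # Sketch: list the capsule names exported by scipy.special.cython_special
    import scipy.special.cython_special as cysp

    for name in cysp.__pyx_capi__:
        if "gammaln" in name:
            print(name)  # this is the name to pass to get_cython_function_address

The actual wrapper then looks like this: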
#slightly modified version of https://github.com/numba/numba/issues/3086
from numba.extending import get_cython_function_address
from numba import vectorize, njit
import ctypes
import numpy as np

_PTR = ctypes.POINTER
_dble = ctypes.c_double
_ptr_dble = _PTR(_dble)

# look up the address of the compiled Cython function and wrap it via ctypes
addr = get_cython_function_address("scipy.special.cython_special", "gammaln")
functype = ctypes.CFUNCTYPE(_dble, _dble)  # double gammaln(double)
gammaln_float64 = functype(addr)

@njit
def numba_gammaln(x):
    return gammaln_float64(x)
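If you prefer a ufunc-like interface, the same ctypes wrapper can also be fed to numba.vectorize (a sketch; vec_numba_gammaln is just an illustrative name):

    # Sketch: expose the wrapped function as a NumPy-style ufunc
    @vectorize(['float64(float64)'])
    def vec_numba_gammaln(x):
        return gammaln_float64(x)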
Usage within Numba
#Numba example with loops
import numba as nb
import numpy as np

@nb.njit()
def Test_func(A):
    out = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        out[i] = numba_gammaln(A[i])
    return out
Timings
A = np.random.rand(1_000_000)

Test_func(A):              39.1 ms
scipy.special.gammaln(A):  39.1 ms
Of course you can easily parallelize this function to outperform the single-threaded gammaln implementation in scipy, and you can call it efficiently from within any Numba-compiled function, for example as sketched below.
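A parallel variant could look like this (a sketch; it reuses the numba_gammaln wrapper from above and lets nb.prange distribute the loop over threads):

    @nb.njit(parallel=True)
    def Test_func_parallel(A):
        out = np.empty(A.shape[0])
        # prange splits the iterations across all available cores
        for i in nb.prange(A.shape[0]):
            out[i] = numba_gammaln(A[i])
        return out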
Upvotes: 0
Reputation: 9877
The runtime of your function will scale linearly (up to some constant overhead) with the number of iterations, so getting the number of iterations down is key to speeding up the algorithm. Whilst computing HARMONC_10MIL beforehand is a smart idea, it actually leads to worse accuracy when you truncate the series; computing only part of the series turns out to give higher accuracy.
The code below is a modified version of the code posted above (although using cython instead of numba).
from libc.math cimport log, log1p
cimport cython

cdef:
    float EULER_MAS = 0.577215664901532  # euler mascheroni constant

@cython.cdivision(True)
def gammaln(float z, int n=1000):
    """Compute log of gamma function for some real positive float z"""
    cdef:
        float out = -EULER_MAS*z - log(z)
        int k
        float t
    for k in range(1, n):
        t = z / k
        out += t - log1p(t)
    return out
It is able to obtain a close approximation even after 100 iterations, as shown in the figure below.
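If you want to check the accuracy yourself (a sketch; it assumes the Cython function above has been compiled and imported as gammaln), you can compare a 100-term truncation against scipy for a few values of z; the error grows with z, in line with the iteration-count argument further down:

    from scipy import special

    # compare the truncated series against scipy's reference implementation
    for z in (0.5, 1.5, 5.0):
        approx = gammaln(z, 100)
        exact = special.gammaln(z)
        print(z, approx, exact, abs(approx - exact))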
At 100 iterations, its runtime is of the same order of magnitude as scipy.special.gammaln:
%timeit special.gammaln(5)
# 932 ns ± 19 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit gammaln(5, 100)
# 1.25 µs ± 20.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
The remaining question is of course how many iterations to use. The function log1p(t) can be expanded as a Taylor series for small t (which is relevant in the limit of large k). In particular,

log1p(t) = t - t ** 2 / 2 + ...

such that, for large k, the argument of the sum becomes

t - log1p(t) = t ** 2 / 2 + ...

Consequently, the argument of the sum is zero up to second order in t, which is negligible if t is sufficiently small. In other words, the number of iterations should be at least as large as z, preferably at least an order of magnitude larger.
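A quick numeric check of this (a sketch in plain NumPy): summing the leading t ** 2 / 2 terms over the dropped tail gives a rough error estimate of z ** 2 / (2 * n), which the direct sum confirms.

    import numpy as np

    z, n = 5.0, 100
    k = np.arange(n, 10_000_000)         # terms dropped by truncating at n
    t = z / k
    neglected = np.sum(t - np.log1p(t))  # actual contribution of the dropped terms
    print(neglected, z**2 / (2 * n))     # compare with the rough estimate z^2 / (2n)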
However, I'd stick with scipy's well-tested implementation if at all possible.
Upvotes: 3
Reputation: 2468
I managed to get a performance increase of roughly 3x by trying the parallel mode of numba and using mostly vectorized operations (sadly, numba can't understand numpy.subtract.reduce).
from functools import reduce
import numpy as np
from numba import njit

# EULER_MAS and HARMONC_10MIL are the constants defined in the question

@njit(fastmath=True, parallel=True)
def gammaln_vec(z):
    out = -EULER_MAS*z - np.log(z) + z*HARMONC_10MIL
    n = 10000000
    v = np.log(1 + z/np.arange(1, n+1))
    # subtract every term of v from out (equivalent to out - v.sum())
    return reduce(lambda acc, x: acc - x, v, out)
Times:
#Your function:
%timeit gammaln(1.5)
48.6 ms ± 1.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
#My function:
%timeit gammaln_vec(1.5)
15 ms ± 340 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
#scipy's function
%timeit gammaln_sp(1.5)
1.07 µs ± 18.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Still, you will be much better off using scipy's function; without going down to C code I don't know how to speed it up further.
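For reference, a simpler variant of the same idea (a sketch, not the function timed above): np.sum is supported by numba, which avoids the functools.reduce workaround entirely.

    # Sketch: same series, but with the summation done by np.sum
    # (EULER_MAS and HARMONC_10MIL are the constants from the question)
    @njit(fastmath=True, parallel=True)
    def gammaln_vec_sum(z):
        out = -EULER_MAS*z - np.log(z) + z*HARMONC_10MIL
        v = np.log(1 + z/np.arange(1, 10000001))
        return out - np.sum(v)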
Upvotes: 0