Ohm

Reputation: 2442

Using Numba to speed up a finite-difference Laplacian

I am using Python to solve a reaction-diffusion system of equations (the FitzHugh-Nagumo model). I would like to learn how to use Numba to accelerate the calculation. I currently import the following laplacian.py module into my integration script:

import numpy as np

def neumann_laplacian_1d(u,dx2):
    """Return finite difference Laplacian approximation of 2d array.
    Uses Neumann boundary conditions and a 2nd order approximation.
    """
    laplacian = np.zeros(u.shape)
    laplacian[1:-1] =  ((1.0)*u[2:] 
                       +(1.0)*u[:-2]
                       -(2.0)*u[1:-1])
    # Neumann boundary conditions
    # edges
    laplacian[0]  =  ((2.0)*u[1]-(2.0)*u[0])
    laplacian[-1] =  ((2.0)*u[-2]-(2.0)*u[-1])

    return laplacian/ dx2

Here u is a NumPy 1D array that stands for one of the fields. I tried adding the decorator @autojit(target="cpu") after importing it with from numba import autojit, roughly as sketched below, but I didn't see any improvement in the calculation. Could anyone give me a hint on how to use Numba properly in this case?
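A minimal sketch of what I tried (the old autojit API from Numba 0.2x applied to the function above; the body itself is unchanged and is elided here):

from numba import autojit

@autojit(target="cpu")
def neumann_laplacian_1d(u, dx2):
    # body identical to the plain NumPy version shown above
    ...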

The input array I have used here is

a = random.random(252)

And so I've compared the performance with the line:

%timeit(neumann_laplacian_1d(a,1.0))

With Numba I got:

%timeit(neumann_laplacian_1d(a,1.0))
The slowest run took 22071.00 times longer than the fastest. This could mean that an intermediate result is being cached 
1 loops, best of 3: 14.1 µs per loop

Without Numba I got (!!):

%timeit(neumann_laplacian_1d(a,1.0))
The slowest run took 11.84 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 9.12 µs per loop

Numba actually makes it slower.

Upvotes: 1

Views: 833

Answers (1)

M4rtini

Reputation: 13539

I am unable to replicate your results.

Python version: 3.4.4 |Anaconda 2.4.1 (64-bit)| (default, Jan 19 2016, 12:10:59) [MSC v.1600 64 bit (AMD64)]

numba version: 0.23.1

import numba as nb
import numpy as np

def neumann_laplacian_1d(u,dx2):
    """Return finite difference Laplacian approximation of 2d array.
    Uses Neumann boundary conditions and a 2nd order approximation.
    """
    laplacian = np.zeros(u.shape)
    laplacian[1:-1] =  ((1.0)*u[2:] 
                       +(1.0)*u[:-2]
                       -(2.0)*u[1:-1])
    # Neumann boundary conditions
    # edges
    laplacian[0]  =  ((2.0)*u[1]-(2.0)*u[0])
    laplacian[-1] =  ((2.0)*u[-2]-(2.0)*u[-1])

    return laplacian/ dx2

@nb.autojit(nopython=True)
def neumann_laplacian_1d_numba(u,dx2):
    """Return finite difference Laplacian approximation of 2d array.
    Uses Neumann boundary conditions and a 2nd order approximation.
    """
    laplacian = np.zeros(u.shape)
    laplacian[1:-1] =  ((1.0)*u[2:] 
                       +(1.0)*u[:-2]
                       -(2.0)*u[1:-1])
    # Neumann boundary conditions
    # edges
    laplacian[0]  =  ((2.0)*u[1]-(2.0)*u[0])
    laplacian[-1] =  ((2.0)*u[-2]-(2.0)*u[-1])

    return laplacian/ dx2

a = np.random.random(252)
# run once to make the JIT do its work before timing
neumann_laplacian_1d_numba(a, 1.0)


%timeit neumann_laplacian_1d(a, 1.0)
%timeit neumann_laplacian_1d_numba(a, 1.0)

>>10000 loops, best of 3: 21.5 µs per loop
>>The slowest run took 4.49 times longer than the fastest. This could mean that an intermediate result is being cached 
>>100000 loops, best of 3: 3.53 µs per loop

I see similar results for Python 2.7.11 and Numba 0.23:

>>100000 loops, best of 3: 19.1 µs per loop
>>The slowest run took 8.55 times longer than the fastest. This could mean that an intermediate result is being cached 
>>100000 loops, best of 3: 2.4 µs per loop
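If you need more speed, the stencil can also be written as an explicit loop, which Numba's nopython mode typically compiles without the temporary arrays that the sliced version allocates. A rough, untimed sketch (the name neumann_laplacian_1d_loop is mine, reusing the nb/np imports from the code above):

import numba as nb
import numpy as np

@nb.jit(nopython=True)
def neumann_laplacian_1d_loop(u, dx2):
    # Same second-order stencil, computed element by element
    n = u.shape[0]
    laplacian = np.empty(n)
    for i in range(1, n - 1):
        laplacian[i] = (u[i + 1] + u[i - 1] - 2.0 * u[i]) / dx2
    # Neumann boundary conditions at the edges
    laplacian[0] = (2.0 * u[1] - 2.0 * u[0]) / dx2
    laplacian[n - 1] = (2.0 * u[n - 2] - 2.0 * u[n - 1]) / dx2
    return laplacian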

Upvotes: 1
