Willi SIhombing

Reputation: 15

How to make a fast loop for a matrix calculation in Python

I have a problem: this for loop takes a very long time to complete. I want a faster way to do it. My code is:

import numpy as np

dx = 20
dy = 20
dz = 20

x = np.arange(0, 1201, dx)
y = np.arange(0, 1001, dy)
z = np.arange(20, 501, dz)
drho = 3000  # Delta Rho (Density Contrast) kg/m^3

# Input Rho to Model
M = np.zeros((len(z), len(x), len(y)))
M[6:16, 26:36, 15:25] = drho
m = np.array(M.flat)
# Shapes: M is (25, 61, 51), m is (77775,)
# Station Position
stx, sty = np.meshgrid(x, y)
stx = np.array(stx.flat)
sty = np.array(sty.flat)
stz = np.zeros(len(stx))

# Make meshgrid
X, Y, Z = np.meshgrid(x, y, z)
X = np.array(X.flat)
Y = np.array(Y.flat)
Z = np.array(Z.flat)

p = np.zeros((len(stx), len(X)))

# p(3111, 77775)
for i in range(len(X)):
    for j in range(len(stx)):
        p[j, i] = (Z[i] - stz[j]) / ((Z[i] - stz[j]) ** 2 + (X[i] - stx[j]) ** 2 + (Y[i] - sty[j]) ** 2) ** (3/2)

The number of iterations (len(stx) * len(X) = 3111 * 77775) is sometimes larger than 241 million, and that takes forever.

Upvotes: 1

Views: 86

Answers (2)

yatu

Reputation: 88226

You can improve performance here by using broadcasting:

p = (Z - stz[:,None]) / ((Z - stz[:,None])**2  + (X - stx[:,None])**2 + (Y - sty[:,None])**2) ** (3/2)

Note that the improvement in performance here will be at the expense of memory efficiency, as pointed out by jerome.
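If the broadcasted temporaries are too large to hold in memory (at the original sizes, each (3111, 77775) float64 array is roughly 1.9 GB), a middle ground is to broadcast over chunks of stations. This is a sketch of my own, not part of the original answer, and it assumes the arrays defined in the question are in scope:

def ap_chunked(chunk=256):
    # Same formula as the broadcasting one-liner above, but only a
    # (chunk, len(X)) block of temporaries exists at any time;
    # the full output array p is still allocated.
    p = np.empty((len(stx), len(X)))
    for start in range(0, len(stx), chunk):
        sl = slice(start, start + chunk)
        dz_ = Z - stz[sl, None]
        dx_ = X - stx[sl, None]
        dy_ = Y - sty[sl, None]
        p[sl] = dz_ / (dz_**2 + dx_**2 + dy_**2) ** 1.5
    return p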


Check and timings, using these smaller arrays instead (and re-running the setup above with them):

x = np.arange(0, 121, dx)
y = np.arange(0, 101, dy)
z = np.arange(20, 51, dz)

def op():
    p = np.zeros((len(stx), len(X)))
    for i in range(len(X)):
        for j in range(len(stx)):
            p[j, i] = (Z[i] - stz[j]) / ((Z[i] - stz[j]) ** 2 + (X[i] - stx[j]) ** 2 + (Y[i] - sty[j]) ** 2) ** (3/2)
    return p

def ap_1():
    return (Z - stz[:,None]) / ((Z - stz[:,None])**2  + (X - stx[:,None])**2 + (Y - sty[:,None])**2) ** (3/2)

%timeit p = op()
# 44.5 ms ± 3.58 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit p_ = ap_1()
# 169 µs ± 2.27 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

np.allclose(p_, p)
# True

We can further boost performance and improve memory efficiency by letting numexpr take care of the arithmetic:

import numexpr as ne

def ap_2():
    return ne.evaluate('(Z - stz2D) / ((Z - stz2D)**2  + (X - stx2D)**2 + (Y - sty2D)**2) ** (3/2)',
           {'stz2D':stz[:,None], 'stx2D':stx[:,None], 'sty2D':sty[:,None]})

%timeit ap_2()
# 106 µs ± 6.34 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
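
As with the broadcast version, the numexpr result can be sanity-checked against the loop output (a quick check, not part of the original answer):

np.allclose(ap_2(), p)
# expected to match within floating-point tolerance

The gain here comes mainly from numexpr evaluating the whole expression blockwise with multiple threads, so the large intermediate arrays created by the plain broadcasting expression are never materialized.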

So with the second approach we get roughly a 420x speedup over the original loop.

Upvotes: 2

Parthanon

Reputation: 388

A really cool thing about Python is that the map(), filter() and reduce() functions are heavily optimised for large datasets.

If you replaced your for loops with these functions, you should see a small performance improvement (or a big one... maybe).
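
For concreteness, here is a rough sketch of that idea (my own, assuming the arrays from the question are in scope). The outer loop is replaced with map(), while the inner loop over stations is vectorized with NumPy, since calling a plain Python function once per pair for all ~242 million (i, j) pairs would still dominate the runtime:

def column(i):
    # kernel values from source cell i to every station, vectorized over stations
    dz_ = Z[i] - stz
    dx_ = X[i] - stx
    dy_ = Y[i] - sty
    return dz_ / (dz_**2 + dx_**2 + dy_**2) ** 1.5

p = np.stack(list(map(column, range(len(X)))), axis=1)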

Upvotes: 0
