Reputation: 23
The code I have uses Monte Carlo techniques to plot points where an electron would be found around a hydrogen atom, but producing the graph can take upwards of a few hours. I've been looking at ways to reduce this time and have tried using numba — I don't know if I'm using it correctly, but it hasn't made any difference.
Can someone please help me?
from scipy.special import genlaguerre, sph_harm
from numpy import random, linspace, sqrt, pi, arccos, exp
from matplotlib.pyplot import plot, figure, show
from numba import jit
n = 2
l = 1
m = -1
Lx = 3*10**-9
Ly = Lx
a_0 = 5.29*10**-11
x = linspace(0,Lx, 100000)
fi = linspace(0,2*pi, 100000)
def fact(x):
    if x == 0:
        return 1
    else:
        return x*fact(x-1)

def row(r):
    # dimensionless radius rho = 2r/(n*a_0)
    return (2*r)/(n*a_0)

def Hydrograd(r):
    # radial part R_nl(r) of the hydrogen wavefunction
    return -1*sqrt((2/(n*a_0))**3*(fact(n - l - 1))/(2*n*(fact(n+l))**3))*exp(-1*row(r)/2)*row(r)**l*genlaguerre(n-l-1, 2*l+1)(row(r))

def Hydrogsph(theta, phi):
    # angular part: real part of the spherical harmonic Y_lm
    return sph_harm(m, l, theta, phi).real

@jit
def HydroFunc(r, theta, phi):
    # probability density |R_nl|^2 * |Y_lm|^2
    return abs(Hydrograd(r))**2*abs(Hydrogsph(theta, phi))**2

def rFinder(x, y, z):
    return sqrt(x**2 + y**2 + z**2)

def phiFinder(x, y, z):
    return arccos(x/rFinder(x, y, z))

i = 0
p = 50

@jit
def monty(Lx, Ly, x, fi):
    # rejection sampling: accept a random point (xr, yr) when the density
    # there exceeds a uniform draw, until p points have been accepted
    i = 0
    p = 50
    X = []
    Y = []
    while i < p:
        xr = random.uniform(-Lx, Lx)
        yr = random.uniform(-Ly, Ly)
        if HydroFunc(rFinder(xr, yr, 0), 0, phiFinder(xr, yr, 0)) > random.uniform(0, max(HydroFunc(x, 0, fi))/100):
            X.append(xr)
            Y.append(yr)
            i += 1
            print(i)
    figure()
    plot(X, Y, '.')
    show()

monty(Lx, Ly, x, fi)
monty(Lx,Ly,x,fi)
Upvotes: 2
Views: 1860
Reputation: 16164
Just had a quick play and noticed that max(HydroFunc(x,0,fi))/100 looks constant. You could therefore change your monty function to hoist it out of the loop:
def monty(Lx, Ly, x, fi):
    rmax = max(HydroFunc(x, 0, fi)) / 100
    # ...
        if HydroFunc(rFinder(xr, yr, 0), 0, phiFinder(xr, yr, 0)) > random.uniform(0, rmax):
This speeds things up a lot for me. I notice you're doing other redundant calculations too, e.g. phiFinder calls rFinder internally, while you also call rFinder outside with the same arguments.
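To illustrate that last point, here is a minimal sketch (the helper name propose is my own, not from the question) that computes the square root once per sample and reuses it for both r and phi:

```python
import numpy as np

# hypothetical helper: draw one proposal point and compute its polar
# coordinates, evaluating the square root only once per point
def propose(Lx, Ly, rng=np.random):
    xr = rng.uniform(-Lx, Lx)
    yr = rng.uniform(-Ly, Ly)
    r = np.sqrt(xr**2 + yr**2)   # rFinder(xr, yr, 0), computed once
    phi = np.arccos(xr / r)      # phiFinder reuses r instead of recomputing it
    return xr, yr, r, phi
```

The saving per call is small, but it sits in the innermost loop, so it adds up.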
Your rejection rate is also very high, so you could look at shrinking your proposal distribution. A Markov-chain algorithm might help as well, e.g.:
def mcmc(sigma, nsamples):
    rmax = max(HydroFunc(x, 0, fi)) / 100
    x1, y1 = 1e-9, 1e-9
    p1 = min(rmax, HydroFunc(rFinder(x1, y1, 0), 0, phiFinder(x1, y1, 0)))
    for _ in range(nsamples):
        x2, y2 = random.normal([x1, y1], scale=sigma)
        p2 = min(rmax, HydroFunc(rFinder(x2, y2, 0), 0, phiFinder(x2, y2, 0)))
        if p2 / p1 > random.uniform():
            x1, y1, p1 = x2, y2, p2
        yield x1, y1
This gives me 100k (auto-correlated) samples in ~15 seconds, while it would take ~350 seconds to get the same from your method (after moving that rmax calculation out of the inner loop). As far as I can tell this samples from the same distribution; I run and plot with:
out = np.array(list(mcmc(8e-10, 100000)))
# thin out the chain to reduce autocorrelation
ii = range(0, len(out), 5)
plt.scatter(out[ii,0], out[ii,1], 10, edgecolor='none', alpha=0.1)
plt.xlim(-1e-9, 1e-9)
plt.ylim(-1e-9, 1e-9)
(the orange points are 50 samples from your original implementation that took 30 seconds)
I've just updated to LLVM 9, so numba doesn't work for me at the moment, but I'd suggest looking at https://numba.pydata.org/numba-doc/dev/user/5minguide.html to figure out what it's doing. I've previously found the nopython=True switch to be very useful.
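A hedged illustration of why the plain @jit decorators in the question don't help: numba's nopython mode can only compile code it fully understands, and scipy.special functions like genlaguerre and sph_harm aren't supported, so decorating functions that call them falls back to slow object mode. With nopython=True that fallback becomes an explicit error instead. A sketch on a pure-Python numeric function (my own example, not from the question; the try/except is just a fallback so the sketch still runs without numba installed):

```python
try:
    from numba import jit
except ImportError:
    # fallback: no-op decorator so the sketch runs without numba
    def jit(*args, **kwargs):
        def deco(f):
            return f
        return deco

# nopython=True: compile to machine code, or raise an error rather than
# silently falling back to slow object mode
@jit(nopython=True)
def fact_iter(n):
    out = 1.0
    for k in range(2, n + 1):
        out *= k
    return out
```

A loop like this compiles cleanly; the original fact/Hydrograd chain would not, because of the scipy calls.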
Upvotes: 1