So, for a Monte Carlo class, I created a uniform random number generator to simulate a normal distribution and use it for MC option pricing, and I'm going terribly wrong somewhere. It uses a simple linear congruential generator (LCG) to generate a random vector, which is fed into a numerical approximation of the inverse normal CDF (the Beasley-Springer-Moro algorithm) to generate standard normally distributed values (elaboration on the exact process can be found here).
This is my code so far.
RNG:
def lcrm_generator(num, a, seed, c):
    mod = 4294967296  # 2^32
    val = int(seed)   # int() is to ensure seeds with a leading zero still get accepted by the program
    rng = []
    for i in range(num):
        val = (a * val + c) % mod
        rng.append(val / (mod - 1))  # scale into [0, 1]
    return rng
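As a quick sanity check on the uniforms (a minimal sketch, not part of the original pipeline): a U(0,1) stream should have a mean near 1/2 and a variance near 1/12.

import numpy as np

# Sanity check: sample moments of the uniform stream
u = lcrm_generator(100000, 301, "0", 21)
print("mean:", np.mean(u))      # expect ~0.5
print("variance:", np.var(u))   # expect ~1/12 = 0.0833...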
Inverse normal approximator:
from math import log

def bsm_algorithm(u):
    # These are my necessary initial constants
    a0 = 2.50662823884; a1 = -18.61500062529; a2 = 41.39119773534; a3 = -25.44106049637
    b0 = -8.47351093090; b1 = 23.08336743743; b2 = -21.06224101826; b3 = 3.13082909833
    c0 = 0.3374754822726147; c1 = 0.9761690190917186; c2 = 0.1607979714918209; c3 = 0.0276438810333863
    c4 = 0.0038405729373609; c5 = 0.0003951896511919; c6 = 0.0000321767881768; c7 = 0.0000002888167364
    c8 = 0.0000003960315187
    x = [0]*len(u)
    for i in range(len(u)):
        y = u[i] - 0.5
        if abs(y) < 0.42:
            # Central region: rational approximation in y
            r = y**2
            x[i] = y*(((a3*r+a2)*r+a1)*r+a0)/((((b3*r+b2)*r+b1)*r+b0)*r+1)
        else:
            # Tail region: polynomial in log(-log(r)), mirrored for the lower tail
            r = u[i]
            if y > 0:
                r = 1 - u[i]
            r = log(-log(r))
            x[i] = c0+r*(c1+r*(c2+r*(c3+r*(c4+r*(c5+r*(c6+r*(c7+r*c8)))))))
            if y < 0:
                x[i] = -x[i]
    return x
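To check the approximation's accuracy directly (a minimal sketch, assuming SciPy is available), one can compare it against scipy.stats.norm.ppf on a grid of probabilities:

import numpy as np
from scipy.stats import norm

# Worst-case error of the BSM approximation, away from the endpoints
p = np.linspace(1e-6, 1 - 1e-6, 100001)
approx = np.array(bsm_algorithm(list(p)))
exact = norm.ppf(p)
print("max abs error:", np.max(np.abs(approx - exact)))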
Combining these two with the following and drawing the histogram shows that the data look correctly normal:
import matplotlib.pyplot as plt

a = lcrm_generator(100000, 301, "0", 21)
b = bsm_algorithm(a)
plt.hist(b, bins=100)
plt.show()
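A histogram can hide mild deviations, so a quantitative check is also possible (a sketch, assuming SciPy): a Kolmogorov-Smirnov test against the standard normal.

from scipy.stats import kstest

# KS test of the sample against N(0,1); a small statistic / large p-value
# means the test finds no evidence against normality.
stat, pvalue = kstest(b, 'norm')
print("KS statistic:", stat, "p-value:", pvalue)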
And the option pricing function:
from math import exp, sqrt
import numpy as np
import matplotlib.pyplot as plt

def LookbackOP(S, K, r, sigma, intervals, sims, Call_Put=1):
    ## My objects that will determine the option prices.
    path = [0]*intervals
    values = [0]*sims
    ## Objects to hold the random nums used for simulation.
    randuni = [0]*sims
    randnorm = [0]*sims
    for i in range(sims):
        randuni[i] = lcrm_generator(intervals, 301, i, 21)
        randnorm[i] = bsm_algorithm(randuni[i])
    # Generating the simulations one by one.
    for i in range(sims):
        path[0] = 1
        ## Below is to generate one whole simulated path.
        ################## MY INCORRECT WAY ##################
        for j in range(1, intervals):
            path[j] = path[j-1]*exp((r - .5*sigma**2)*(1/intervals) + sqrt(1/intervals)*randnorm[i][j])
        ################## CORRECT BUILT-IN WAY ##################
        # path[j] = path[j-1]*exp((r - .5*sigma**2)*(1/intervals) + sqrt(1/intervals)*np.random.normal(0,1))
        ## For each separate simulation, price the option either as a call or a put.
        if Call_Put == 1:
            values[i] = max(S*max(path) - K, 0)
        elif Call_Put == 0:
            values[i] = max(K - S*min(path), 0)
        else:
            print("Error: You entered something other than '1 = Call' or '0 = Put'")
    plt.hist(values, bins=30)
    plt.show()
    ## Get the expected option value by taking the average over simulations.
    option_value = np.mean(values)
    print(option_value)
    return option_value
In the last block of code, my error is pointed out, and it can seemingly be fixed simply by using numpy's normal RNG. Using one versus the other produces drastically different answers, and I'm tempted to trust numpy over myself, but my code looks normal, so where am I going wrong?
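For reference, here is a minimal sketch (not from the original post) that overlays histograms of the two normal sources, making their marginal distributions easy to compare directly:

import numpy as np
import matplotlib.pyplot as plt

# Overlay the two sources of standard normals on one set of axes
mine = bsm_algorithm(lcrm_generator(100000, 301, "0", 21))
theirs = np.random.normal(0, 1, 100000)
plt.hist(mine, bins=100, alpha=0.5, density=True, label="LCG + BSM")
plt.hist(theirs, bins=100, alpha=0.5, density=True, label="np.random.normal")
plt.legend()
plt.show()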
First,

a = lcrm_generator(100000, 301, "0", 21)

this looks weird - why do you need a string for the seed? Anyway, good parameters are in the table here: https://en.wikipedia.org/wiki/Linear_congruential_generator. But the problem is not the LCG, I believe; rather, there might be a systematic difference in your Gaussian.
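For example, a quick sketch swapping in the Numerical Recipes parameters from that table (a = 1664525, c = 1013904223, m = 2^32), which fit your generator's signature directly:

# Numerical Recipes LCG parameters, per the Wikipedia table
q = lcrm_generator(100000, 1664525, 0, 1013904223)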
I ran the code

from scipy.stats import norm

q = lcrm_generator(100000, 301, "0", 21)
g = bsm_algorithm(q)
param = norm.fit(g)
print(param)
for 100 000, 1 000 000 and 10 000 000 samples, and my output is
(0.0009370998731855792, 0.9982155665317061)
(-0.0006429485100838258, 0.9996875045073682)
(-0.0007464875819397183, 1.0002711142625116)
and there is no improvement between 1 000 000 and 10 000 000 samples. Basically, you are using an approximation to the inverse Gaussian CDF, and these small deviations are just artifacts of that approximation; there is nothing to be done about it.
Numpy, I believe, uses one of the exact normal sampling methods (Box-Muller, I think).
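For comparison, a minimal Box-Muller sketch (illustrative only; not necessarily numpy's actual internal implementation) that turns the same LCG uniforms into exact normals:

from math import log, sqrt, cos, sin, pi

def box_muller(u):
    # Transforms pairs of U(0,1) draws into pairs of exact N(0,1) draws
    z = []
    for i in range(0, len(u) - 1, 2):
        r = sqrt(-2 * log(max(u[i], 1e-300)))  # guard against u == 0
        z.append(r * cos(2 * pi * u[i+1]))
        z.append(r * sin(2 * pi * u[i+1]))
    return z

g = box_muller(lcrm_generator(100000, 301, "0", 21))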