Reputation: 41
I need help experimentally determining the computational complexity of computing the determinant of an n x n matrix.
My code:
import numpy as np
import timeit
t0 = time.time()
for n in range(1, 10):
    A = np.random.rand(n, n)
    det = np.linalg.slogdet(A)
    t = timeit.timeit(lambda: det)
    print(t)
But I get the same time for every n, hence computing complexity O(N), which is not correct as it is meant to be O(N^3). Any help would be much appreciated.
Upvotes: 4
Views: 824
Reputation: 1
There are some possible reasons:

The main one is that timeit.timeit(lambda: det) does not time the determinant computation at all. By the time timeit runs, det has already been computed, so the lambda merely returns a stored result, which takes the same (tiny) amount of time regardless of n.

You'd also typically expect n=1 to have the worst ratio of running time to n, simply because of constant overhead like variable initialization.
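To actually time the computation, the lambda has to call slogdet itself. A minimal sketch of what that could look like (the sizes and repeat count are arbitrary choices for illustration):

    import timeit
    import numpy as np

    for n in (32, 128, 512):
        A = np.random.rand(n, n)
        # The lambda calls slogdet, so timeit re-runs the
        # computation 'number' times instead of returning a
        # precomputed value.
        t = timeit.timeit(lambda: np.linalg.slogdet(A), number=5)
        print('n={}: {:.4f}s'.format(n, t))

With this change the reported times should grow with n rather than staying flat.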
Upvotes: 0
Reputation: 117886
For what it's worth, any meaningful benchmarking typically requires sufficiently large N to give the computer something to chew on. A 10x10 matrix is not nearly large enough to start seeing complexity. Start throwing numbers like 100, 1000, 10000, etc, then you'll see your scaling.
For example, if I slightly modify your code (adding the missing imports):

import time
import numpy as np

for n in range(1, 14):
    t0 = time.time()
    p = 2**n
    A = np.random.rand(p, p)
    det = np.linalg.slogdet(A)
    print('N={:04d} : {:.2e}s'.format(p, time.time() - t0))
This results in
N=0002 : 4.35e-02s
N=0004 : 0.00e+00s
N=0008 : 0.00e+00s
N=0016 : 5.02e-04s
N=0032 : 0.00e+00s
N=0064 : 5.02e-04s
N=0128 : 5.01e-04s
N=0256 : 1.50e-03s
N=0512 : 8.00e-03s
N=1024 : 3.95e-02s
N=2048 : 2.05e-01s
N=4096 : 1.01e+00s
N=8192 : 7.14e+00s
You can see that for very small values of N, some small-value optimizations and tricks make it hard to see O() complexity, but as the values of N grow, you can start to see the scaling.
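One way to turn those timings into an actual exponent estimate is a least-squares fit of log(t) against log(N): for t ~ c*N^k, the slope of that line is k. A rough sketch, assuming np.polyfit for the fit (the chosen sizes are arbitrary; fast BLAS implementations may report an exponent somewhat below 3):

    import time
    import numpy as np

    sizes = [256, 512, 1024, 2048]
    times = []
    for p in sizes:
        A = np.random.rand(p, p)
        t0 = time.time()
        np.linalg.slogdet(A)
        times.append(time.time() - t0)

    # Fit log(t) = k*log(N) + c; the slope k estimates the
    # exponent in the O(N^k) running time.
    slope, intercept = np.polyfit(np.log(sizes), np.log(times), 1)
    print('estimated exponent: {:.2f}'.format(slope))

For large enough N the slope should approach 3, matching the expected O(N^3) cost of the underlying LU factorization.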
Upvotes: 2