Reputation: 962
As a non-computer-scientist I find it a bit hard to understand time complexity and the way it is calculated. So my question is: is it possible to derive the time complexity of a certain algorithm/program by running it on increasingly large input data, and then looking at how the runtime changes relative to the increase in the input size n?
I'm asking this because I wrote an algorithm in C++ which basically does pixel shading on 2D images, using a single CPU core and a single thread (3 GHz processor). I measured the runtime on input sizes from 2^4 up to 2^30, which corresponds to a matrix of size 32,768 x 32,768. Now I have this plot of how my runtime behaves as a function of my input size n:
So for input sizes from n = 2^4 to 2^30, the exact runtimes were (by row):
[1] 0.000 0.000 0.000 0.000 0.000 0.000 0.000
[8] 0.000 0.000 0.000 0.000 0.001 0.000 0.000
[15] 0.002 0.004 0.013 0.018 0.053 0.079 0.231
[22] 0.358 0.963 1.772 4.626 9.713 25.582
Now this is a bit strange: when the power of 2 changes from odd to even, the runtime is multiplied by only about 1.5, but when it changes from even to odd, the runtime roughly triples. So when I double my input, my runtime increases on average by a multiple of (3 + 1.5) / 2 = 2.25. In fact, it seems as though when n becomes arbitrarily large, both changes in the power, from odd to even and from even to odd, cause the runtime to be multiplied by a constant of 2.25. In other words: as n becomes larger, the runtime multiplier converges to 2.25.
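To make that concrete, here is a minimal C++ sketch (it contains nothing but the timings hard-coded from the listing above) that prints the multiplier produced by each doubling of the input, skipping values too small to time reliably:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        // Runtimes in seconds for n = 2^4 .. 2^30, copied from the listing above.
        const std::vector<double> t = {
            0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000,
            0.000, 0.000, 0.000, 0.000, 0.001, 0.000, 0.000,
            0.002, 0.004, 0.013, 0.018, 0.053, 0.079, 0.231,
            0.358, 0.963, 1.772, 4.626, 9.713, 25.582};

        // Print the multiplier caused by each doubling of the input size,
        // skipping entries too small for the timer to resolve.
        for (std::size_t i = 1; i < t.size(); ++i) {
            const int p = 4 + static_cast<int>(i);  // current power of two
            if (t[i - 1] >= 0.002)
                std::printf("2^%d -> 2^%d: x%.2f\n", p - 1, p, t[i] / t[i - 1]);
        }
        return 0;
    }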
If my algorithm is quite complex, is there a way to say something about its time complexity from this analysis?
Upvotes: 4
Views: 292
Reputation: 2004
Having C(4 * n) = (1.5 * 3) * C(n) suggests that you have a complexity in O(n^1.08), where 1.08 ≈ log(4.5)/log(4).
Of course it's just a hint, and we can prove nothing asymptotically.
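To spell out where the 1.08 comes from, assuming a pure power law C(n) = A * n^a: then C(4n)/C(n) = 4^a, and setting 4^a = 4.5 gives a = log(4.5)/log(4) ≈ 1.085. A tiny C++ check of that arithmetic:

    #include <cmath>
    #include <cstdio>

    int main() {
        // Assuming C(n) = A * n^a, the observed ratio C(4n)/C(n) = 1.5 * 3 = 4.5
        // pins down the exponent: 4^a = 4.5  =>  a = log(4.5) / log(4).
        const double a = std::log(4.5) / std::log(4.0);
        std::printf("a = %.3f\n", a);  // prints a = 1.085, i.e. roughly O(n^1.08)
        return 0;
    }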
Upvotes: 3
Reputation: 28312
I think that for a lot of algorithms it's perfectly reasonable to figure out a curve that fits the data well and then use the expression for that curve as a working complexity. For this to work well, you might want to throw out the "small" input sizes and focus on larger ones, to minimize the effect of non-asymptotic overhead.
For instance, we can tell it most likely grows faster than quadratically (taking n here to be the power of 2, so f(30) is the runtime at input size 2^30): solving for the constant of A*n^2 at f(30) and then calculating what we'd expect for f(20) gives a number that is far too large, implying the function is asymptotically growing much faster than a quadratic. If we instead assume an exponential function A*2^n and solve for the constant at f(30), we get an expected value for f(20) that is much closer to the measured one (though somewhat above it, so the function might actually be growing a little faster than A*2^n; we could probably introduce a new factor B, fit A*2^(Bn), and get a little closer).
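To illustrate that check concretely, here is a small C++ sketch using f(30) = 25.582 and f(20) = 0.013 from the question's listing (the quadratic and exponential forms are just the assumed models being tested):

    #include <cmath>
    #include <cstdio>

    int main() {
        const double f30 = 25.582;  // measured runtime at 2^30 (from the question)
        const double f20 = 0.013;   // measured runtime at 2^20

        // Quadratic model f(n) = A * n^2, with A chosen so the model matches f(30).
        const double a_quad = f30 / (30.0 * 30.0);
        std::printf("quadratic model predicts f(20) = %.3f, measured %.3f\n",
                    a_quad * 20.0 * 20.0, f20);  // ~11.4: far too large

        // Exponential model f(n) = A * 2^n, again anchored at f(30).
        const double a_exp = f30 / std::pow(2.0, 30.0);
        std::printf("exponential model predicts f(20) = %.3f, measured %.3f\n",
                    a_exp * std::pow(2.0, 20.0), f20);  // ~0.025: much closer
        return 0;
    }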
It's not a valid or accepted way to calculate the theoretical asymptotic time complexity of the function whose graph you're looking at, but I think it's fine to say that, based on this graph, your asymptotic complexity is likely exponential with a base around 2.
Actually, it looks like your function alternates between doubling its values and tripling them. If that is really the case, then you'd expect every two increments in n to yield a six-fold increase; the square root of 6 is about 2.45, so your function really might be an exponential like A*2.45^n, or at least that might give a better fit than using base 2.
Upvotes: 2