Reputation: 27
I have been trying to work out the time complexity of this algorithm, but I don't think my answer is right.
#include <stdio.h>
#include <stdlib.h>

void unknownRuntime( FILE* input )
{
    int temp;
    int size;
    int n;
    int i = 0;
    int *array;

    /* Read the element count. */
    if( fscanf( input, "%d", &size ) != 1 )
    {
        exit(-1);
    }

    array = (int *) malloc( size * sizeof(int) );
    if( array == NULL )
    {
        exit(-1);
    }

    /* Fill the array; checking i < size first keeps fscanf from
       consuming a value that won't be stored. */
    while( i < size && fscanf( input, "%d", &temp ) == 1 )
    {
        array[i] = temp;
        i++;
    }

    for( i = 0; i < size; i++ )
    {
        /* n is an int, so n /= 1.01 truncates the quotient back to int. */
        for( n = size - 1; n > 1; n /= 1.01 )
        {
            array[n-1] = array[n];
        }
    }

    free(array);
}
I developed a program that times the mystery function on a range of array sizes to establish the relationship between input size and running time. From the running times I plotted, it appears to be O(N log N), but I can't see where the algorithm takes O(1) time to cut the problem size down by a constant fraction. Any help would be appreciated.
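Here is a minimal sketch of the kind of timing harness I used (the file name, the size range, and timing with clock() are illustrative, not my exact code):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void unknownRuntime( FILE* input );  /* the mystery function above */

int main( void )
{
    /* Double the input size each round and time one call per size. */
    for( int size = 1000; size <= 64000; size *= 2 )
    {
        /* Write `size` followed by `size` random integers. */
        FILE *f = fopen( "input.txt", "w" );
        if( f == NULL ) exit(-1);
        fprintf( f, "%d\n", size );
        for( int i = 0; i < size; i++ )
            fprintf( f, "%d\n", rand() );
        fclose( f );

        f = fopen( "input.txt", "r" );
        if( f == NULL ) exit(-1);
        clock_t start = clock();
        unknownRuntime( f );
        double seconds = (double)( clock() - start ) / CLOCKS_PER_SEC;
        fclose( f );

        printf( "size=%6d  time=%.4f s\n", size, seconds );
    }
    return 0;
}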
Upvotes: 0
Views: 131
Reputation: 385789
Let's look at the inner loop, and let's pretend for the moment that we're dealing with real numbers (as opposed to truncating integer arithmetic). After k steps of the inner loop,
k=0: n = (size-1)
k=1: n = (size-1)/1.01
k=2: n = (size-1)/1.01/1.01
k=3: n = (size-1)/1.01/1.01/1.01
...
We could write that as
n = (size-1) / ( 1.01 ^ k ) // "^" is being used to denote exponentiation.
The loop stops once n <= 1, which happens when
1.01 ^ k >= size - 1
which is to say when
k >= log[1.01](size - 1) // log[base](x)
The inner loop therefore runs O(log N) times, so the whole function is O(N log N).[1]
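To sanity-check that closed form, here is a small standalone program (my own sketch, not from the question) that counts the inner loop's steps using real-number (double) division and compares the count against log[1.01](size - 1):

#include <stdio.h>
#include <math.h>

int main( void )
{
    int sizes[] = { 100, 1000, 10000, 100000 };
    for( int s = 0; s < 4; s++ )
    {
        int size = sizes[s];

        /* Count inner-loop steps with real-number division. */
        int steps = 0;
        for( double n = size - 1; n > 1; n /= 1.01 )
            steps++;

        /* Closed form: log base 1.01 of (size - 1). */
        double predicted = log( size - 1 ) / log( 1.01 );

        printf( "size=%6d  steps=%4d  log[1.01](size-1)=%7.1f\n",
                size, steps, predicted );
    }
    return 0;
}

The measured count comes out as the ceiling of the logarithm, matching the derivation.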
Now, the above assumed real numbers. What the algorithm really computes is
k=0: n = (size-1)
k=1: n = floor( (size-1) / 1.01 )
k=2: n = floor( floor( (size-1) / 1.01 ) / 1.01 )
k=3: n = floor( floor( floor( (size-1) / 1.01 ) / 1.01 ) / 1.01 )
...
where floor represents the truncation caused by the implicit conversion back to int. (There can also be some error from limited floating-point precision, which isn't represented above.) In this progression, n decreases a little faster than in the real-number one, so it reaches 1 sooner, but the difference is minor, and the number of steps is still O(log N).
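A quick way to see how little the truncation matters is to count steps both ways. This sketch (mine, not from the original post) compares the integer loop's step count with the real-number version's:

#include <stdio.h>

int main( void )
{
    int sizes[] = { 100, 1000, 10000, 100000 };
    for( int s = 0; s < 4; s++ )
    {
        int size = sizes[s];

        /* The loop as written: n is an int, so each division truncates. */
        int trunc_steps = 0;
        for( int n = size - 1; n > 1; n /= 1.01 )
            trunc_steps++;

        /* The idealized loop: n stays a real number. */
        int real_steps = 0;
        for( double n = size - 1; n > 1; n /= 1.01 )
            real_steps++;

        printf( "size=%6d  truncated=%4d  real=%4d\n",
                size, trunc_steps, real_steps );
    }
    return 0;
}

The truncated loop finishes in fewer steps, but both counts grow logarithmically in size, which is all Big-O needs.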
[1] In computer science, log usually refers to the base-2 logarithm, but the base is irrelevant here: Big-O notation ignores constant factors, and converting a logarithm from one base to another only multiplies it by a constant. For example, log[1.01](x) = log[2](x) / log[2](1.01) ≈ 69.7 * log[2](x).
Upvotes: 1