Reputation: 1
I wanted to try a C implementation, but the results for the mergesort algorithm were strange: for a given array length, every run reported exactly the same complexity, even though the elements were pseudo-random thanks to rand(). I know that trying to understand the "problem" this way is not easy, so here is the code that I wrote/copied from the internet:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define DIM 1000000
void merge(int*, int, int, int);
void mergesort(int*, int, int);
int complexity = 0; //complexity counts how many times the program goes through the relevant loops

int main(int argc, char *argv[]) {
    //an array of DIM ints is too large for the stack, so allocate it on the heap
    int *v = malloc(DIM * sizeof(int));
    int i;
    srand((unsigned)time(NULL));
    for (i = 0; i < DIM; i++) {
        v[i] = rand() % 100;
        printf("%d\n", v[i]);
    }
    mergesort(v, 0, DIM - 1);
    for (i = 0; i < DIM; i++)
        printf("%d\t", v[i]);
    printf("\n");
    printf("iterations: %d\n", complexity);
    free(v);
    return 0;
}
void mergesort(int v[], int lo, int hi) {
    int mid;
    if (lo < hi) {
        mid = (lo + hi) / 2; //midpoint of the current subarray
        mergesort(v, lo, mid);
        mergesort(v, mid + 1, hi);
        merge(v, lo, mid, hi);
    }
    return;
}
//This implementation is not actually mine; I got it from a site because I couldn't figure out why mine wouldn't run and sort the input
void merge(int v[], int p, int q, int r) {
    int n1, n2, i, j, k, *L, *M;
    n1 = q - p + 1;
    n2 = r - q;
    //creation and initialization of the left and right vectors
    L = (int*)malloc(n1 * sizeof(int));
    M = (int*)malloc(n2 * sizeof(int));
    for (i = 0; i < n1; i++) {
        L[i] = v[p + i];
        complexity++;
    }
    for (j = 0; j < n2; j++) {
        M[j] = v[q + 1 + j];
        complexity++;
    }
    //merging section
    i = 0;
    j = 0;
    k = p;
    while (i < n1 && j < n2) {
        if (L[i] <= M[j]) {
            v[k] = L[i];
            i++;
            complexity++;
        } else {
            v[k] = M[j];
            j++;
            complexity++;
        }
        k++;
    }
    //from what I understood, this is the section where whatever is left over gets copied into the remaining spots
    while (i < n1) {
        v[k] = L[i];
        i++;
        k++;
        complexity++;
    }
    while (j < n2) {
        v[k] = M[j];
        j++;
        k++;
        complexity++;
    }
    //the temporary vectors were never freed, which leaks memory on every call
    free(L);
    free(M);
    return;
}
I'll leave an image of a few trials I did with various sorting algorithms down here as well.
Here is the question: is it normal for the variable that counts temporal complexity to be constant? My initial thought was that it was due to a bad implementation of the counter, but I have no idea how to prove it, simply because my knowledge of the algorithm is not that strong. If that ends up being the correct answer, can you direct me towards an implementation (not too complicated but still functional) of a counter that evaluates temporal complexity more precisely?
Edit: The columns A to I of the Excel screenshot that I uploaded correspond to the lengths of the randomly generated arrays; the values are: 100, 500, 1000, 5000, 10000, 50000, 1000000.
Upvotes: 0
Views: 67
Reputation: 58261
No matter the contents of the arrays, the merge function increments complexity by 2(r-p+1). Since the merge function is the only part of the code that depends on the contents of the array, this shows that the complexity variable is incremented a fixed number of times overall.
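To see why the total is fixed for the whole sort: a call on a subarray of length n does 2n increments in merge, plus whatever its two recursive calls do, so the grand total depends only on n and never on the element values. As a rough sketch (predicted_count is a name I made up, it is not in your code), the total satisfies the recurrence T(n) = T(ceil(n/2)) + T(floor(n/2)) + 2n with T(1) = 0:

/* Predicted final value of `complexity` for an array of length n.
 * The element values never appear here, which is exactly why every
 * run on an array of the same length reports the same number. */
long predicted_count(long n) {
    if (n <= 1)
        return 0;
    return predicted_count((n + 1) / 2) + predicted_count(n / 2) + 2 * n;
}

For n = 8 this returns 48 = 2 * 8 * log2(8); in general the total grows like 2n*log2(n), which is why the counter tracks the array length and nothing else.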
Here's a sketch of why the merge function increments complexity by 2(r-p+1):
This block increments it by n1:

for (i = 0; i < n1; i++) {
    L[i] = v[p + i];
    complexity++;
}
This block by n2:

for (j = 0; j < n2; j++) {
    M[j] = v[q + 1 + j];
    complexity++;
}
In the remaining code, i and j start at 0; at each step, either i or j is incremented by 1 and complexity is incremented along with it. When the function returns, i has reached n1 and j has reached n2, so these loops add another n1+n2 to complexity. In other words, every element of L and M is copied back into v exactly once, regardless of how the comparisons turn out.
Since n1 = q-p+1 and n2 = r-q, overall the merge function increments complexity by 2*n1 + 2*n2 = 2(q-p+1+r-q) = 2(r-p+1).
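If you want a counter that actually varies with the input, count key comparisons instead of copies: the number of times L[i] <= M[j] is evaluated does depend on how the two halves interleave. Here is a minimal sketch of an instrumented merge (merge_counting and the comparisons global are names I'm introducing; this is one possible instrumentation, not the only one):

#include <stdlib.h>

long comparisons = 0; //counts element comparisons, which do vary with the input

/* Same merge as above, but counting comparisons instead of copies.
 * Each evaluation of L[i] <= M[j] is counted once; the leftover-copy
 * loops at the end perform no comparisons, so nothing is counted there. */
void merge_counting(int v[], int p, int q, int r) {
    int n1 = q - p + 1, n2 = r - q, i, j, k;
    int *L = malloc(n1 * sizeof(int));
    int *M = malloc(n2 * sizeof(int));
    for (i = 0; i < n1; i++)
        L[i] = v[p + i];
    for (j = 0; j < n2; j++)
        M[j] = v[q + 1 + j];
    i = 0; j = 0; k = p;
    while (i < n1 && j < n2) {
        comparisons++; //one comparison per loop iteration
        if (L[i] <= M[j])
            v[k++] = L[i++];
        else
            v[k++] = M[j++];
    }
    while (i < n1) v[k++] = L[i++]; //no comparisons in the leftover copies
    while (j < n2) v[k++] = M[j++];
    free(L);
    free(M);
}

With this counter, an already-sorted array needs noticeably fewer comparisons than a random one (roughly (n/2)*log2(n) versus close to n*log2(n)), so repeated runs on random data will no longer report exactly the same number.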
Upvotes: 1