Suhrid Mulay

Reputation: 175

Why is the time changing?

So this is what I did. First, here is my simple insertion sort code, which I decided to time. This is what the code looks like:

#include<stdio.h>
#include<time.h>

void insertion_sort(int arr[], int len){
    int i;
    int key;
    for(i = 1; i < len; i++){
        key = arr[i];
        int j = i - 1;
        /* check j >= 0 before reading arr[j] to avoid an out-of-bounds access */
        while(j >= 0 && arr[j] >= key){
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}


int main(){
    int arr[5] = {3, 2, 5, 1, 4};
    clock_t beg = clock();
    insertion_sort(arr, 5);
    clock_t end = clock();
    int i;
    for(i = 0; i < 5; i++)
    {
        printf("%d\n", arr[i]);
    }
    double deltaT = (double)(end - beg)/CLOCKS_PER_SEC;
    printf("Time take is: %lf seconds\n", deltaT); 
}

So next I compile and run my code. The output is:

1
2
3
4
5
Time taken is: 0.000002 seconds

Then I decided that seconds are too coarse a unit and that milliseconds would be better. So I multiplied the calculation by a thousand, like this:

deltaT = (double)(end - beg) * 1000 / CLOCKS_PER_SEC;

and then changed the relevant printf. The reading still corresponds to the same 2 μs. The real surprise comes when I remove the 1000 again and revert the code to its old form.

This time the reported time miraculously changes to a rather slower 3 μs. What is the reason for this abrupt change? We expect our machines to behave the same way under the same inputs, so why does the performance vary on the same input when I try it?

Upvotes: 0

Views: 94

Answers (2)

Luis Colorado

Reputation: 12708

Think twice when you try to measure time in clock ticks. The result is an integer number of clock ticks, and the length of a tick depends on the architecture. This means that sometimes you get one clock tick (which, translated to microseconds, can mean something) and sometimes two (doubling the original time).

Better use clock_gettime(2) or gettimeofday(2): the former has nanosecond resolution (although the actual precision can be lower) and the latter has microsecond resolution. In any case, you do not get 1.0E6 or 1.0E9 ticks per second.
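
As an illustration, here is a minimal sketch of timing the question's sort with clock_gettime(2) on a POSIX system. The choice of CLOCK_MONOTONIC and the feature-test macro are assumptions for this sketch, not part of the original code:

#define _POSIX_C_SOURCE 200809L
#include<stdio.h>
#include<time.h>

/* same insertion sort as in the question */
void insertion_sort(int arr[], int len){
    int i;
    for(i = 1; i < len; i++){
        int key = arr[i];
        int j = i - 1;
        while(j >= 0 && arr[j] >= key){
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

int main(){
    int arr[5] = {3, 2, 5, 1, 4};
    struct timespec beg, end;

    /* CLOCK_MONOTONIC is unaffected by wall-clock adjustments */
    clock_gettime(CLOCK_MONOTONIC, &beg);
    insertion_sort(arr, 5);
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* combine the seconds and nanoseconds fields into one double */
    double deltaT = (end.tv_sec - beg.tv_sec)
                  + (end.tv_nsec - beg.tv_nsec) / 1.0e9;
    printf("Time taken is: %.9f seconds\n", deltaT);
}

Even with nanosecond resolution, a single 5-element sort will sit close to the timer's granularity, so the numbers will still jitter from run to run.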

Upvotes: 0

Elias

Reputation: 923

The computation you try to measure is much too fast for you to get any meaningful timing measurements in that way.

Try this:

#include<stdio.h>
#include<time.h>

int main(){
    clock_t beg = clock();   /* nothing happens between these two calls */
    clock_t end = clock();
    double deltaT = (double)(end - beg)/CLOCKS_PER_SEC;
    printf("Time taken is: %lf seconds\n", deltaT);
}

Then you will probably find that you get similar "Time taken" output, even though nothing at all is done between the two clock() calls. What you are seeing is just a result of the significant overhead of calling the clock() function.

To get meaningful measurements of the performance of your sort function, either sort a much larger list, or repeat the sort many times in a loop, so that you get long enough times to measure.
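
For instance, here is a rough sketch of the repeat-in-a-loop idea; the repetition count of 1,000,000, the memcpy used to restore the unsorted input each pass, and the averaged per-sort figure are illustrative choices, not a prescription:

#include<stdio.h>
#include<string.h>
#include<time.h>

/* same insertion sort as in the question */
void insertion_sort(int arr[], int len){
    int i;
    for(i = 1; i < len; i++){
        int key = arr[i];
        int j = i - 1;
        while(j >= 0 && arr[j] >= key){
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

int main(){
    const int reps = 1000000;           /* arbitrary repetition count */
    int base[5] = {3, 2, 5, 1, 4};
    int arr[5];
    int r;

    clock_t beg = clock();
    for(r = 0; r < reps; r++){
        memcpy(arr, base, sizeof arr);  /* restore the unsorted input each pass */
        insertion_sort(arr, 5);
    }
    clock_t end = clock();

    double total = (double)(end - beg) / CLOCKS_PER_SEC;
    printf("Total: %f seconds, per sort: %e seconds\n", total, total / reps);
}

Note that an aggressively optimizing compiler can remove work whose result is never used, so printing or otherwise using the sorted array (or compiling without heavy optimization) keeps the measurement honest.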

To answer the question about why the time was changing: I think it simply varies a little randomly from run to run. Run it a few more times and you may well see the earlier value again.

Upvotes: 1
