Ugluk

Reputation: 77

clock() returns 0

When I run my code below I get a value of 0; a few times I did get a value from intAddition. I've tried many suggestions I found online, but have yet to prevail. My classmate showed me how he did his, and it was very similar to mine; he was getting small values, 1 to 3, from his program.

Thanks for the help!

#include <iostream>
#include <time.h>
#include <stdio.h>

clock_t start, end;

void intAddition(int a, int b){
    start = clock();
    a + b;
    end = clock();  
    printf("CPU cycles to execute integer addition operation: %d\n", end-start);
}

void intMult(int a, int b){
    start = clock();
    a * b;
    end = clock();
    printf("CPU cycles to execute integer multiplication operation: %d\n", end-start);
}

void floatAddition(float a, float b){
    start = clock();
    a + b;
    end = clock();
    printf("CPU cycles to execute float addition operation: %d\n", end-start);
}

void floatMult(float a, float b){
    start = clock();
    a * b;
    end = clock();
    printf("CPU cycles to execute float multiplication operation: %d\n", end-start);
}

int main()
{
    int a,b;
    float c,d;

    a = 3, b = 6;
    c = 3.7534, d = 6.85464;

    intAddition(a,b);
    intMult(a,b);
    floatAddition(c,d);
    floatMult(c,d);

    return 0;
}

Upvotes: 3

Views: 5348

Answers (2)

Keith Thompson

Reputation: 263617

The value returned by clock() is of type clock_t (an implementation-defined arithmetic type). It represents "implementation’s best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation" (N1570 7.27.2.1).

Given a clock_t value, you can determine the number of seconds it represents by dividing it by CLOCKS_PER_SEC, an implementation-defined macro defined in <time.h>. POSIX requires CLOCKS_PER_SEC to be one million, but it may have different values on non-POSIX systems.

Note in particular that the value of CLOCKS_PER_SEC does not necessarily correspond to the actual precision of the clock() function.

Depending on the implementation, two successive calls to clock() might return the same value if the amount of CPU time consumed is less than the precision of the clock() function. On one system I tested, the resolution of the clock() function is 0.01 second; the CPU can execute a lot of instructions in that time.

Here's a test program:

#include <stdio.h>
#include <time.h>
#include <limits.h>
int main(void) {
    long count = 0;
    clock_t c0 = clock(), c1;
    while ((c1 = clock()) == c0) {
        count ++;
    }
    printf("c0 = %ld, c1 = %ld, count = %ld\n",
           (long)c0, (long)c1, count);
    printf("clock_t is a %d-bit ", (int)sizeof (clock_t) * CHAR_BIT);
    if ((clock_t)-1 > (clock_t)0) {
        puts("unsigned integer type");
    }
    else if ((clock_t)1 / 2 == 0) {
        puts("signed integer type");
    }
    else {
        puts("floating-point type");
    }
    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
    return 0;
}

On one system (Linux x86_64), the output is:

c0 = 831, c1 = 833, count = 0
clock_t is a 64-bit signed integer type
CLOCKS_PER_SEC = 1000000

Apparently on that system the clock() function's actual resolution is one or two microseconds, and two successive calls to clock() return distinct values.

On another system (Solaris SPARC), the output is:

c0 = 0, c1 = 10000, count = 10447
clock_t is a 32-bit signed integer type
CLOCKS_PER_SEC = 1000000

On that system, the resolution of the clock() function is 0.01 second (10,000 microseconds), and the value returned by clock() did not change for several thousand iterations.

There's (at least) one more thing to watch out for. On a system where clock_t is 32 bits, with CLOCKS_PER_SEC == 1000000, the value can wrap around after about 72 minutes of CPU time (about 36 minutes if clock_t is signed), which could be significant for long-running programs. Consult your system's documentation for the details.

Upvotes: 6

Barmak Shemirani

Reputation: 31659

On some compilers clock() measures time in milliseconds. Also, the compiler is too smart for simple tests like these: it may skip the operations entirely, because their results are never used.

For example, this loop will probably take less than 1 millisecond (unless a debugger is attached or optimization is off):

#include <stdio.h>
#include <time.h>

int main(void) {
    int R = 1;
    int a = 2;
    int b = 3;
    clock_t start = clock();
    for (int i = 0; i < 10000000; i++)
        R = a * b;
    printf("time passed: %ld\n", (long)(clock() - start));
    return 0;
}

R is always the same number (6), and R is never used afterwards, so the compiler may skip all the calculations. You have to print R at the end, or otherwise use the result, to keep the compiler from optimizing the test away.

Upvotes: 3
