Reputation: 375
I am trying to learn how to use clock(). Here is a piece of code that I have:
#include <cstdlib>   // srand, rand
#include <ctime>     // clock, time, CLOCKS_PER_SEC
#include <iostream>
using namespace std;

int main()
{
    srand(time(NULL));
    clock_t t;
    int num[100000];
    int total = 0;
    t = clock();
    cout << "tick:" << t << endl;
    for (int i = 0; i < 100000; i++)
    {
        num[i] = rand();
        //cout << num[i] << endl;
    }
    for (int j = 0; j < 100000; j++)
    {
        total += num[j];
    }
    t = clock();
    cout << "total:" << total << endl;
    cout << "ticks after loop:" << t << endl;
    //std::cout << "The number of ticks for the loop to calculate total:" << t << "\t time in seconds:" << ((float)t)/CLOCKS_PER_SEC << endl;
    cin.get();
}
The result I get is shown in the image below. I don't understand why the tick counts are the same even though there are two big loops in between.
Upvotes: 2
Views: 2085
Reputation: 7939
The clock() function has a finite resolution. On VC2013 it ticks once per millisecond (your system may vary). If you call clock() twice within the same millisecond (or whatever the tick period is), you get the same value.
In <ctime> there is a constant, CLOCKS_PER_SEC, which tells you how many ticks there are per second. For VC2012 that is 1000.
**Update 1**
You said you're on Windows. Here's some Windows-specific code that gets higher-resolution time. If I get time I'll try to do something portable.
#include <iostream>
#include <vector>
#include <ctime>
#include <cstdlib>
#include <Windows.h>

int main()
{
    ::srand(static_cast<unsigned>(::time(NULL)));

    FILETIME ftStart, ftEnd;
    const int nMax = 1000*1000;
    std::vector<unsigned> vBuff(nMax);
    int nTotal = 0;

    ::GetSystemTimeAsFileTime(&ftStart);
    for (int i = 0; i < nMax; i++)
    {
        vBuff[i] = rand();
    }
    for (int j = 0; j < nMax; j++)
    {
        nTotal += vBuff[j];
    }
    ::GetSystemTimeAsFileTime(&ftEnd);

    // FILETIME counts 100-nanosecond units, so /10000.0 converts to
    // milliseconds. Subtracting only dwLowDateTime is fine here because
    // the interval is far shorter than the low DWORD's range (~7 minutes);
    // a longer interval would need the full 64-bit value.
    double dElapsed = (ftEnd.dwLowDateTime - ftStart.dwLowDateTime) / 10000.0;
    std::cout << "Elapsed time = " << dElapsed << " millisec\n";
    return 0;
}
**Update 2**
OK, here's the portable version.
#include <iostream>
#include <vector>
#include <ctime>
#include <cstdlib>
#include <cstdint>
#include <chrono>

// abbreviations to avoid long lines
typedef std::chrono::high_resolution_clock Clock_t;
typedef std::chrono::time_point<Clock_t> TimePoint_t;
typedef std::chrono::microseconds usec;

uint64_t ToUsec(Clock_t::duration t)
{
    return std::chrono::duration_cast<usec>(t).count();
}

int main()
{
    ::srand(static_cast<unsigned>(::time(nullptr)));

    const int nMax = 1000*1000;
    std::vector<unsigned> vBuff(nMax);
    int nTotal = 0;

    TimePoint_t tStart(Clock_t::now());
    for (int i = 0; i < nMax; i++)
    {
        vBuff[i] = rand();
    }
    for (int j = 0; j < nMax; j++)
    {
        nTotal += vBuff[j];
    }
    TimePoint_t tEnd(Clock_t::now());

    uint64_t nMicroSec = ToUsec(tEnd - tStart);
    std::cout << "Elapsed time = "
              << nMicroSec / 1000.0
              << " millisec\n";
    return 0;
}
Upvotes: 3
Reputation: 121599
Strong suggestion:
Run the same benchmark, but try multiple, alternative timing methods: for example, clock(), std::chrono::high_resolution_clock, or a platform-specific high-resolution timer. Etc.
The problem with (POSIX-compliant) clock() is that it isn't necessarily accurate enough for meaningful benchmarks; it depends on your compiler library/platform.
Upvotes: 2
Reputation: 1
Time has limited accuracy (perhaps only several milliseconds), and on Linux clock()
has been slightly improved in very recent libc
versions. Finally, your loop is too small (a typical elementary C instruction runs in less than a few nanoseconds). Make it bigger, e.g. run it a billion times. But then you should declare static int num[1000000000];
to avoid using too much stack space.
Upvotes: 0