Reputation:
How can I measure time in finer units such as picoseconds or femtoseconds? I am measuring the running times of functions, and the result comes out as 0 when I use milliseconds or even nanoseconds. I think the chrono library only goes down to nanoseconds; that was the most precise unit that appeared when I pressed ctrl+space after typing chrono::.
#include <chrono>
#include <iostream>

void f()
{
    // the function being timed
}

int main()
{
    auto t1 = std::chrono::high_resolution_clock::now();
    f();
    auto t2 = std::chrono::high_resolution_clock::now();
    std::cout << "f() took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count()
              << " milliseconds\n";
}
code source: http://en.cppreference.com/w/cpp/chrono/duration/duration_cast
Thanks.
Upvotes: 7
Views: 13603
Reputation: 1451
You can represent time in more precise durations (picoseconds and beyond); see
www.stroustrup.com/C++11FAQ.html
There you will find the following definition:
typedef ratio<1, 1000000000000> pico;
then use:
duration<double, pico> d{1.23}; //...1.23 picoseconds
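For instance, here is a minimal self-contained sketch of that idea (the 1.23 value is just an arbitrary sample):

#include <chrono>
#include <iostream>
#include <ratio>

int main()
{
    // tick period of 1/1'000'000'000'000 of a second
    typedef std::ratio<1, 1000000000000> pico;
    std::chrono::duration<double, pico> d{1.23}; // 1.23 picoseconds

    // with a floating-point representation, conversions are implicit
    std::chrono::duration<double, std::nano> n = d;
    std::cout << d.count() << " ps = " << n.count() << " ns\n";
}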
UPDATE Your question really has three parts: how to define a finer-grained duration such as picoseconds, why your measurement prints 0, and whether your clock can actually deliver that precision.
Above I answered only the first part (how to define a "picoseconds" duration). Consider the following code as an example:
#include <chrono>
#include <iostream>
#include <typeinfo>
using namespace std;
using namespace std::chrono;
void f()
{
    enum { max_count = 10000 };
    cout << '|';
    // volatile stops the compiler from optimizing the counting loop away
    for(volatile size_t count = 0; count != max_count; ++count)
    {
        // print a progress dot at every 10% of the loop
        if(!(count % (max_count / 10)))
            cout << '.';
    }
    cout << "|\n";
}
int main()
{
    // one picosecond is 1/1'000'000'000'000 of a second
    typedef std::ratio<1, 1000000000000> pico;
    typedef duration<long long, pico> picoseconds;
    // std::chrono::milliseconds and std::chrono::microseconds already exist

    const auto t1 = high_resolution_clock::now();
    enum { number_of_test_cycles = 10 };
    for(size_t count = 0; count != number_of_test_cycles; ++count)
    {
        f();
    }
    const auto t2 = high_resolution_clock::now();
    cout << number_of_test_cycles << " times f() took:\n"
         << duration_cast<milliseconds>(t2 - t1).count() << " milliseconds\n"
         << duration_cast<microseconds>(t2 - t1).count() << " microseconds\n"
         << duration_cast<picoseconds>(t2 - t1).count() << " picoseconds\n";
}
It produces this output:
$ ./test
|..........|
|..........|
|..........|
|..........|
|..........|
|..........|
|..........|
|..........|
|..........|
|..........|
10 times f() took:
1 milliseconds
1084 microseconds
1084000000 picoseconds
As you can see, in order to get a 1 millisecond result I had to repeat f() 10 times. Repeating your test is the general approach when your timer doesn't have enough precision. There is one problem with repetition: it is not necessarily true that repeating your test N times takes a proportional amount of time (caches warm up, for instance), so you need to verify that first.
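A minimal sketch of that repetition approach (N = 1000 is an arbitrary choice, and f() stands in for whatever you are measuring):

#include <chrono>
#include <iostream>

void f()
{
    // the code under test; an empty body may be optimized away entirely
}

int main()
{
    using namespace std::chrono;
    const int N = 1000; // repeat until the total is comfortably measurable
    const auto t1 = high_resolution_clock::now();
    for(int i = 0; i != N; ++i)
        f();
    const auto t2 = high_resolution_clock::now();
    const auto total_ns = duration_cast<nanoseconds>(t2 - t1).count();
    std::cout << "average per call: "
              << total_ns / static_cast<double>(N) << " ns\n";
}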
Another thing - although I can do the arithmetic with picosecond durations, my high_resolution_clock cannot give me better precision than microseconds.
To get higher precision you can use the time stamp counter; see https://en.wikipedia.org/wiki/Time_Stamp_Counter - but this is tricky and platform-specific.
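For illustration, a minimal sketch of reading the TSC with the __rdtsc intrinsic (x86 with GCC/Clang; MSVC provides it in <intrin.h> instead). Converting ticks to seconds requires the TSC frequency, which standard C++ does not expose:

#include <x86intrin.h> // __rdtsc; x86-specific, GCC/Clang header
#include <cstdint>
#include <iostream>

void f()
{
    // the code under test
}

int main()
{
    const std::uint64_t start = __rdtsc();
    f();
    const std::uint64_t stop = __rdtsc();
    // raw tick count; the tick rate must be determined per platform
    std::cout << "f() took " << (stop - start) << " TSC ticks\n";
}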
Upvotes: 11
Reputation: 4868
A "standard" PC has a resolution of around 100 nanoseconds, so trying to measure time at resolutions greater than that is not really possible unless you have custom hardware of some kind. See How precise is the internal clock of a modern PC? for a related question and check out the second answer: https://stackoverflow.com/a/2615977/1169863.
Upvotes: 6