Reputation: 16782
We are running some benchmarks, and we are noticing that function foo, which calls function bar and waits for a response, is taking less time to run than function bar itself. foo resides on the client and bar resides on the server; the client and server are separate physical machines communicating over a TCP connection. We are using <ctime> and calling clock() like so:
#include <ctime>
#include <iostream>

/*
 * Resides on the client machine.
 */
void foo()
{
    clock_t start, stop;
    start = clock();
    /*
     * Blocking TCP call to bar() on the server.
     * Uses callback functions.
     */
    stop = clock();
    std::cout << stop - start;
}
#include <ctime>
#include <iostream>

/*
 * Function on the server which executes some code and returns
 * its run time to the client machine.
 */
void bar()
{
    clock_t start, stop;
    start = clock();
    /* connect to the International Space Station and back */
    stop = clock();
    std::cout << stop - start;
}
The output we are getting is something like:

foo():          100000
bar():          200000
foo() - bar(): -100000
We suspect that the issue is that one CPU's sense of time runs faster than the other's, causing this mismatch. What is the best way to get meaningful time metrics (i.e., based on human wall-clock time, not abstract CPU speed)?
Upvotes: 1
Views: 239
Reputation: 1844
You very well may be comparing apples to oranges. There is no guarantee that your two machines are using the same resolution for clock(). Perhaps the more useful thing would be to ensure that your timing numbers are in consistent units, like seconds:

// Cast to double: clock_t is an integer type, so dividing by
// CLOCKS_PER_SEC directly would truncate to whole seconds.
std::cout << (stop - start) / static_cast<double>(CLOCKS_PER_SEC);
Or, you might want to try using Modern C++ and <chrono>:

#include <chrono>
#include <iostream>

using namespace std::chrono;

auto start = high_resolution_clock::now();
...
auto stop = high_resolution_clock::now();
std::cout << duration_cast<duration<float>>(stop - start).count()
          << " seconds" << std::endl;
(Although it's worth noting that high_resolution_clock may be implemented as system_clock; steady_clock is specifically designed for interval timing such as you are trying to do. There may also be non-portable, hardware- or OS-specific solutions with even greater resolution.)
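For instance, here is a minimal self-contained sketch of the steady_clock approach; blocking_tcp_call() is a hypothetical stand-in for your blocking call to bar(), implemented here as a sleep so the example runs on its own:

#include <chrono>
#include <iostream>
#include <thread>

using namespace std::chrono;

// Hypothetical placeholder for the blocking TCP call to bar();
// it just sleeps so the example is self-contained.
void blocking_tcp_call()
{
    std::this_thread::sleep_for(milliseconds(250));
}

int main()
{
    // steady_clock is monotonic: it measures elapsed wall time and is
    // never adjusted backwards, which makes it safe for interval timing.
    auto start = steady_clock::now();
    blocking_tcp_call();
    auto stop = steady_clock::now();

    std::cout << duration_cast<duration<double>>(stop - start).count()
              << " seconds" << std::endl;
}

Measured this way in wall-clock seconds on both machines, foo()'s interval should come out longer than bar()'s, since it includes the network round trip on top of bar()'s run time.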
Upvotes: 1