Galaxy

Reputation: 873

How to measure time cost of multithreading program on Mac OS using C++?

I'm writing a multithreaded program in C++ on Mac OS, and I need to measure its running time.

Here are some functions I found that may be useful for measuring time:

  1. clock(): not accurate for a multithreaded program.
  2. time(): it can measure time, but only with one-second precision.
  3. gettimeofday(): I wrote a sample to test it, but got the wrong time measurement.


#include <stdio.h>
#include <iostream>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

using namespace std;

int main()
{
    timeval start, finish;
    gettimeofday(&start,NULL);

    sleep(10); //sleep for 10 sec

    gettimeofday(&finish,NULL);

    cout<<(finish.tv_usec-start.tv_usec)/1000.0<<"ms"<<endl;


    return 0;

}

The output is just a few ms, and I don't know why.

Does anyone know any other function to measure time on Mac OS?

Upvotes: 2

Views: 468

Answers (3)

Rostislav

Reputation: 3977

The best solution is to use the <chrono> functionality (which of course implies that you're using a C++11-enabled compiler).

Then your code might look like this:

 #include <iostream>

 #include <chrono>
 #include <thread>

 using namespace std;
 using namespace std::chrono;

 int main()
 {
     auto start = high_resolution_clock::now();

     std::this_thread::sleep_for(10s); //sleep for 10 sec

     cout << duration_cast<milliseconds>(high_resolution_clock::now() - start).count() << "ms\n";

     return 0;

 }

Note that I'm also using the neat chrono literal 10s, which is a C++14 feature. If it's not available, just use seconds(10) instead.

Note, however, that the output is not guaranteed to be "10000ms", as the OS is allowed to suspend the thread for longer - see here (the unistd sleep doesn't guarantee that either).

Cheers, Rostislav.

Upvotes: 1

Ken Thomases

Reputation: 90521

struct timeval has two fields, tv_sec and tv_usec. You're ignoring tv_sec, thereby throwing away the most significant part of the time.

Upvotes: 1

John3136

Reputation: 29266

Each of the fields rolls over - for a timeval, when tv_usec reaches 1,000,000 (one second's worth of microseconds), tv_sec is incremented and tv_usec wraps back to 0. It's just like a clock: when seconds hit 60, minutes increment and seconds reset.

If you are trying to measure time this way, you need to take all the fields bigger than your required minimum resolution into account.

Upvotes: 3
