Soroush Rabiei

Reputation: 10868

Wait in C++0x multithreading

I'm playing around with the new C++ standard. I wrote a test to observe the behavior of the scheduling algorithms and to see what happens with threads. Given context-switch overhead, I expected the real wait time of a specific thread to be a bit longer than the value passed to std::this_thread::sleep_for(). But surprisingly, it's sometimes even less than the sleep time! I can't figure out why this happens, or what I'm doing wrong...

#include <iostream>
#include <thread>
#include <random>
#include <vector>
#include <functional>
#include <math.h>
#include <unistd.h>
#include <sys/time.h>

void heavy_job()
{
    // here we're doing some kind of time-consuming job..
    int j=0;
    while(j<1000)
    {
        int* a=new int[100];
        for(int i=0; i<100; ++i)
            a[i] = i;
        delete[] a;
        for(double x=0;x<10000;x+=0.1)
            sqrt(x);
        ++j;
    }
    std::cout << "heavy job finished" << std::endl;
}

void light_job(const std::vector<int>& wait)
{
    struct timeval start, end;
    long utime, seconds, useconds;
    std::cout << std::showpos;
    for(std::vector<int>::const_iterator i = wait.begin();
        i!=wait.end();++i)
    {
        gettimeofday(&start, NULL);
        std::this_thread::sleep_for(std::chrono::microseconds(*i));
        gettimeofday(&end, NULL);
        seconds  = end.tv_sec  - start.tv_sec;
        useconds = end.tv_usec - start.tv_usec;
        utime = ((seconds) * 1000 + useconds/1000.0);
        double delay = *i - utime*1000;
        std::cout << "delay: " << delay/1000.0 << std::endl;
    }
}

int main()
{
    std::vector<int> wait_times;
    std::uniform_int_distribution<unsigned int> unif;
    std::random_device rd;
    std::mt19937 engine(rd());
    std::function<unsigned int()> rnd = std::bind(unif, engine);
    for(int i=0;i<1000;++i)
        wait_times.push_back(rnd()%100000+1); // random sleep time between 1 and 100,000 µs
    std::thread heavy(heavy_job);
    std::thread light(light_job,wait_times);
    light.join();
    heavy.join();
    return 0;
}

Output on my Intel Core-i5 machine:

.....
delay: +0.713
delay: +0.509
delay: -0.008  // !
delay: -0.043  // !!
delay: +0.409
delay: +0.202
delay: +0.077
delay: -0.027  // ?
delay: +0.108
delay: +0.71
delay: +0.498
delay: +0.239
delay: +0.838
delay: -0.017  // also !
delay: +0.157

Upvotes: 2

Views: 456

Answers (1)

Derek Park

Reputation: 46836

Your timing code is causing integral truncation.

utime = ((seconds) * 1000 + useconds/1000.0);
double delay = *i - utime*1000;

Suppose your wait time was 888888 microseconds and the thread sleeps for exactly that long. seconds will be 0 and useconds will be 888888. Dividing useconds by 1000.0 gives 888.888; adding seconds * 1000 (which is 0) still gives 888.888. That value is then assigned to a long, truncating it to 888, so the printed "delay" works out to (888888 - 888*1000)/1000.0 = +0.888 even though the sleep was exact.
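To see the truncation in isolation, here is a minimal, self-contained sketch of the same arithmetic (the 888888 µs value is just the hypothetical from above):

#include <iostream>

int main()
{
    long seconds  = 0;       // elapsed whole seconds
    long useconds = 888888;  // elapsed microseconds
    long utime = seconds * 1000 + useconds/1000.0; // 888.888 is truncated to 888 on assignment
    std::cout << utime << std::endl;               // prints 888, not 888.888
    return 0;
}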

You should change utime to actually store microseconds, both to avoid the truncation and because the name, like useconds, implies microseconds. Something like:

long utime = seconds * 1000000 + useconds;

You've also got your delay calculation backwards; a positive result should mean the thread slept longer than requested. Ignoring the effects of the truncation, and with utime in milliseconds as in your original code, it should be:

double delay = utime*1000 - *i;
std::cout << "delay: " << delay/1000.0 << std::endl;

The way you've got it, all the positive delays you're outputting are actually the result of the truncation, and the negative ones represent actual delays.
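Putting the two fixes together, the timing loop inside light_job could look something like this (just a sketch, keeping your gettimeofday-based measurement and your variable names):

void light_job(const std::vector<int>& wait)
{
    struct timeval start, end;
    std::cout << std::showpos;
    for(std::vector<int>::const_iterator i = wait.begin();
        i != wait.end(); ++i)
    {
        gettimeofday(&start, NULL);
        std::this_thread::sleep_for(std::chrono::microseconds(*i));
        gettimeofday(&end, NULL);
        long seconds  = end.tv_sec  - start.tv_sec;
        long useconds = end.tv_usec - start.tv_usec;
        long utime = seconds * 1000000 + useconds;   // elapsed time in microseconds, no truncation
        double delay = utime - *i;                   // measured minus requested: positive = overshoot
        std::cout << "delay: " << delay/1000.0 << std::endl; // report in milliseconds
    }
}

Since <chrono> is already in use for sleep_for, you could also measure with std::chrono::steady_clock and drop gettimeofday entirely, but the sign and unit fixes above are the essential part.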

Upvotes: 3
