Reputation: 191
Here it says that sleep_for
"Blocks the execution of the current thread for at least the specified sleep_duration."
Here it says that sleep_until
"Blocks the execution of the current thread until specified sleep_time has been reached."
So with this in mind I was simply doing my thing, until I noticed that my code was sleeping for a lot less than the specified time. To make sure it was the sleep_ functions being odd instead of me writing dumb code again, I created this online example: https://ideone.com/9a9MrC (code block below the edit line). When run online, the example does exactly what it should, but running the exact same code on my machine gives me this output: Output on Pastebin
Now I'm truly confused and wondering what the bleep is going wrong on my machine. I'm using Code::Blocks as my IDE on a Win7 x64 machine, in combination with This toolchain containing GCC 4.8.2.
*Before the current one I tried This toolchain, but that one, with GCC 4.8.0, strangely enough wasn't even able to compile the example code.
What could cause this weird behaviour? My machine? Windows? GCC? Something else in the toolchain?
p.s. The example also works as it should on Here, which states that it uses GCC version 4.7.2.
p.p.s. Using #include <windows.h> and calling Sleep( 1 ) also sleeps a lot shorter than the specified 1 millisecond on my machine.
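A minimal standalone sketch for timing that raw Win32 Sleep( 1 ) call could look like this (assumes a Windows toolchain; this loop is illustrative, not the exact code I ran):

    #include <windows.h>
    #include <chrono>
    #include <iostream>

    int main() {
        for( int i = 0; i < 5; ++i ) {
            auto t0 = std::chrono::steady_clock::now();
            Sleep( 1 );   // ask Windows for a 1 ms sleep
            auto t1 = std::chrono::steady_clock::now();
            std::cout << std::chrono::duration_cast<std::chrono::microseconds>( t1 - t0 ).count()
                      << " microseconds\n";
        }
        return 0;
    }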
EDIT: code from example:
#include <iostream>   // std::cout, std::fixed
#include <iomanip>    // std::setprecision
//#include <string>   // std::string
#include <chrono>     // C++11 // std::chrono::steady_clock
#include <thread>     // C++11 // std::this_thread

std::chrono::steady_clock timer;
auto startTime      = timer.now();
auto endTime        = timer.now();
auto sleepUntilTime = timer.now();

int main() {
    for( int i = 0; i < 10; ++i ) {
        startTime = timer.now();
        sleepUntilTime = startTime + std::chrono::nanoseconds( 1000000 );
        std::this_thread::sleep_until( sleepUntilTime );
        endTime = timer.now();

        std::cout << "Start time: " << std::chrono::duration_cast<std::chrono::nanoseconds>( startTime.time_since_epoch() ).count() << "\n";
        std::cout << "End time:   " << std::chrono::duration_cast<std::chrono::nanoseconds>( endTime.time_since_epoch() ).count() << "\n";
        std::cout << "Sleep till: " << std::chrono::duration_cast<std::chrono::nanoseconds>( sleepUntilTime.time_since_epoch() ).count() << "\n";
        std::cout << "It took: " << std::chrono::duration_cast<std::chrono::nanoseconds>( endTime - startTime ).count() << " nanoseconds. \n";

        std::streamsize prec = std::cout.precision();
        std::cout << std::fixed << std::setprecision( 9 );
        // double instead of float: float has too few significant digits for 9 decimals
        std::cout << "It took: " << ( (double) std::chrono::duration_cast<std::chrono::nanoseconds>( endTime - startTime ).count() / 1000000 ) << " milliseconds. \n";
        std::cout << std::setprecision( prec );
    }

    std::cout << "\n\n";

    for( int i = 0; i < 10; ++i ) {
        startTime = timer.now();
        std::this_thread::sleep_for( std::chrono::nanoseconds( 1000000 ) );
        endTime = timer.now();

        std::cout << "Start time: " << std::chrono::duration_cast<std::chrono::nanoseconds>( startTime.time_since_epoch() ).count() << "\n";
        std::cout << "End time:   " << std::chrono::duration_cast<std::chrono::nanoseconds>( endTime.time_since_epoch() ).count() << "\n";
        std::cout << "It took: " << std::chrono::duration_cast<std::chrono::nanoseconds>( endTime - startTime ).count() << " nanoseconds. \n";

        std::streamsize prec = std::cout.precision();
        std::cout << std::fixed << std::setprecision( 9 );
        std::cout << "It took: " << ( (double) std::chrono::duration_cast<std::chrono::nanoseconds>( endTime - startTime ).count() / 1000000 ) << " milliseconds. \n";
        std::cout << std::setprecision( prec );
    }
    return 0;
}
Upvotes: 3
Views: 1330
Reputation: 70126
Nothing is wrong with your machine; it is your assumptions that are wrong.
Sleeping is a very system-dependent and unreliable thing. Generally, on most operating systems, you have a more-or-less-guarantee that a call to sleep will delay execution for at least the time you ask for. The C++ thread library necessarily uses the facilities provided by the operating system, hence the wording in the C++ standard that you quoted.
You will have noted the wording "more-or-less-guarantee" in the above paragraph. First of all, the way sleeping works is not what you might think. It generally does not block until a timer fires and then resumes execution. Instead, it merely marks the thread as "not ready", and additionally does something so this can be undone later (what exactly this is isn't defined, it might be setting a timer or something else).
When the time is up, the operating system will set the thread to "ready to run" again. This doesn't mean it will run, it only means it is a candidate to run, whenever the OS can be bothered and whenever a CPU core is free (and nobody with higher priority wants it).
On traditional non-tickless operating systems, this will mean that the thread will probably (or more precisely, maybe) run at the next scheduler tick. That is, if a CPU is available at all. On more modern operating systems (Linux 3.x or Windows 8) which are "tickless", you're a bit closer to reality, but you still do not have any hard guarantees.
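You can get a rough feel for the effective granularity on your own system with a small measurement loop; here is one possible sketch using only standard C++11 (the exact numbers will vary with OS, load, and timer resolution):

    #include <algorithm>  // std::min, std::max
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        using namespace std::chrono;
        nanoseconds shortest = nanoseconds::max();
        nanoseconds longest  = nanoseconds::zero();
        // Request a 1 ms sleep repeatedly; the spread between the
        // shortest and longest observed delay hints at the scheduler's
        // tick length and rounding behaviour.
        for( int i = 0; i < 100; ++i ) {
            auto t0 = steady_clock::now();
            std::this_thread::sleep_for( milliseconds( 1 ) );
            auto dt = duration_cast<nanoseconds>( steady_clock::now() - t0 );
            shortest = std::min( shortest, dt );
            longest  = std::max( longest, dt );
        }
        std::cout << "shortest: " << shortest.count() << " ns, "
                  << "longest: "  << longest.count()  << " ns\n";
        return 0;
    }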
Further, under Unix-like systems, sleep may be interrupted by a signal and may actually wait less than the specified time. Also, under Windows, the interval at which the scheduler runs is configurable, and to make it worse, different Windows versions behave differently [1] [2] with respect to whether they round the sleep time up or down.
Your system (Windows 7) rounds down, so indeed yes, you may actually wait less than what you expected.
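If you need finer granularity on Windows, the multimedia timer API can raise the timer interrupt resolution. A sketch (assumes MinGW or MSVC with winmm available, i.e. link with -lwinmm; whether the std::this_thread functions benefit from this depends on the implementation):

    #include <windows.h>
    #include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod, link with -lwinmm
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        // Raise the system timer resolution to 1 ms for this process;
        // this influences how Sleep() rounds its argument.
        timeBeginPeriod( 1 );
        for( int i = 0; i < 5; ++i ) {
            auto t0 = std::chrono::steady_clock::now();
            std::this_thread::sleep_for( std::chrono::milliseconds( 1 ) );
            auto t1 = std::chrono::steady_clock::now();
            std::cout << std::chrono::duration_cast<std::chrono::microseconds>( t1 - t0 ).count()
                      << " microseconds\n";
        }
        timeEndPeriod( 1 );   // always pair with timeBeginPeriod
        return 0;
    }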
Sleep is unreliable and only a very rough "hint" (not a requirement) that you wish to pass control back to the operating system for some time.
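If you genuinely need a deadline that is never undershot, a common workaround is to sleep coarsely and then spin the remainder against a steady clock. A sketch; the helper name sleep_until_precise and the 2 ms slack are my own choices, not a standard facility:

    #include <chrono>
    #include <thread>

    // Sleep most of the way with the OS, then busy-wait the last
    // stretch against steady_clock so the deadline is not undershot.
    void sleep_until_precise( std::chrono::steady_clock::time_point deadline ) {
        // Leave a couple of milliseconds of slack, since the OS may
        // wake us early or late by roughly one scheduler tick.
        auto coarse = deadline - std::chrono::milliseconds( 2 );
        if( std::chrono::steady_clock::now() < coarse )
            std::this_thread::sleep_until( coarse );
        while( std::chrono::steady_clock::now() < deadline )
            ; // spin
    }

    int main() {
        auto t0 = std::chrono::steady_clock::now();
        sleep_until_precise( t0 + std::chrono::milliseconds( 1 ) );
        return 0;
    }

The spin trades CPU time for precision, so it only makes sense for short remainders and where the extra accuracy is really worth it.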
Upvotes: 2