Reputation: 5124
I'm monitoring a thread with GetThreadTimes, polling it roughly every 5 microseconds.
That thread is sleeping for 1 minute, but for some reason the "User Time" I get from GetThreadTimes sometimes changes, even though the thread is still asleep.
Kernel time is always 0.
Does anybody know why this happens?
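Roughly what the monitoring loop looks like (a simplified sketch, not my exact code; the thread handle and the way the sampling interval is paced are placeholders):

    // Simplified sketch of the polling loop (placeholder code, not the real thing).
    // hThread is assumed to be a handle opened with THREAD_QUERY_INFORMATION access.
    #include <windows.h>
    #include <cstdio>

    void MonitorThread(HANDLE hThread)
    {
        FILETIME creationTime, exitTime, kernelTime, userTime;
        for (;;)
        {
            if (GetThreadTimes(hThread, &creationTime, &exitTime, &kernelTime, &userTime))
            {
                // FILETIME values are in 100-nanosecond units.
                ULONGLONG userTicks = (static_cast<ULONGLONG>(userTime.dwHighDateTime) << 32)
                                      | userTime.dwLowDateTime;
                printf("user time: %llu x 100ns\n", userTicks);
            }
            Sleep(0); // in reality the loop is paced to roughly 5 microseconds
        }
    }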
thanks :)
Upvotes: 2
Views: 2463
Reputation: 71
5 Microseconds?!
GetThreadTimes() measures the number of quanta a thread has spent asleep and in user/kernel mode. I have observed a typical scheduler quantum of 10-15 ms on Win32. Below one quantum, you will find the times GetThreadTimes() reports do not change -- it basically just multiplies the (integer) number of elapsed quanta in each state by the duration of one quantum.
Below 10-15 ms, you really cannot expect any of the values returned by GetThreadTimes() to be accurate. Very frequently you will see only one of the three measured values updated at a time, exactly as you describe.
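To see this granularity directly, a quick experiment along these lines (a rough sketch; the exact step size varies by system and Windows version) shows the reported user time advancing only in quantum-sized jumps, never smoothly:

    // Sketch: busy-loop in user mode and watch how GetThreadTimes() user time advances.
    // On typical Win32 systems the reported value jumps in roughly 10-15 ms steps.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        HANDLE self = GetCurrentThread();
        FILETIME creationTime, exitTime, kernelTime, userTime;
        ULONGLONG last = 0;

        ULONGLONG start = GetTickCount64();
        while (GetTickCount64() - start < 1000) // spin for about one second
        {
            if (GetThreadTimes(self, &creationTime, &exitTime, &kernelTime, &userTime))
            {
                ULONGLONG now = (static_cast<ULONGLONG>(userTime.dwHighDateTime) << 32)
                                | userTime.dwLowDateTime;
                if (now != last) // print only when the reported user time changes
                {
                    printf("user time jumped to %llu.%04llu ms\n", now / 10000, now % 10000);
                    last = now;
                }
            }
        }
        return 0;
    }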
Both Linux and Win32 have quirky thread runtime reporting behavior at the scale you discuss. The only operating systems I have found that can truly measure scheduled execution time down to 5 microseconds are Mac OS X and VxWorks - QNX probably has this capability as well.
Upvotes: 7
Reputation: 23614
This is not a complete answer, but here are at least two possible reasons:
There is a well-known problem called priority inversion (see the link for details), and one common technique for mitigating it is to randomly boost the priority of waiting threads, so it is possible that the sleeping thread occasionally gets some processor time.
Thread sleeping can be implemented as a loop that periodically checks whether the timeout has expired, as shown in the sketch below.
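As an illustration of that second point, here is a hypothetical sleep implemented as a polling loop (this is not how the real Win32 Sleep() works, just a sketch of the idea); a thread "sleeping" this way keeps accumulating user time while it waits:

    // Hypothetical sleep implemented as a polling loop -- not the real Sleep().
    #include <windows.h>

    void PollingSleep(DWORD milliseconds)
    {
        ULONGLONG deadline = GetTickCount64() + milliseconds;
        while (GetTickCount64() < deadline)
        {
            // Checking the expiration condition runs in user mode, so GetThreadTimes()
            // would report growing user time even though the thread is "sleeping".
            SwitchToThread(); // yield to other runnable threads
        }
    }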
Upvotes: 2
Reputation: 7744
From MSDN: Thread creation and exit times are points in time expressed as the amount of time that has elapsed since midnight on January 1, 1601 at Greenwich, England. There are several functions that an application can use to convert such values to more generally useful forms; see Time Functions.
So it could be showing you the points in time at which the thread was created or exited, rather than an amount of CPU time consumed.
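For example, a sketch of turning the absolute creation time returned by GetThreadTimes into a readable local date and time (hThread here is just a placeholder handle):

    // Sketch: convert the 1601-based creation FILETIME into a readable local time.
    #include <windows.h>
    #include <cstdio>

    void PrintThreadCreationTime(HANDLE hThread)
    {
        FILETIME creationTime, exitTime, kernelTime, userTime;
        if (!GetThreadTimes(hThread, &creationTime, &exitTime, &kernelTime, &userTime))
            return;

        SYSTEMTIME utc, local;
        FileTimeToSystemTime(&creationTime, &utc);               // FILETIME -> calendar time (UTC)
        SystemTimeToTzSpecificLocalTime(nullptr, &utc, &local);  // UTC -> local time zone

        printf("thread created at %04u-%02u-%02u %02u:%02u:%02u\n",
               local.wYear, local.wMonth, local.wDay,
               local.wHour, local.wMinute, local.wSecond);
    }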
Upvotes: 0