Reputation: 353
I've recently been reading about high resolution and low resolution timers, and I don't quite understand. How do they operate? Don't they both operate by checking an internal system clock on the computer hardware? If that's the case, does that mean the higher resolution one just checks it more frequently, consuming more resources?
Is there anything wrong with me using a low resolution timer and when that timer expires starting a high resolution timer within a second (or sooner) of when the high resolution timer is supposed to stop? Or will this not work for some reason?
Roughly how much more in the way of resources does a high resolution timer consume compared to a low resolution timer?
Upvotes: 4
Views: 3161
Reputation: 129524
Typically, a low resolution timer will fire an interrupt to the processor every x amount of time (say, every 1, 5, 10, 16.6667, 17.81291 or 20 ms - these are just examples, there are many other values it could be). The OS uses this to switch tasks when one has been running for "too long", as well as for maintaining the current time, "timing out" waiting events, and suchlike (e.g. a sleep-type function call would set itself to "time out" at the end of the sleep time).
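To illustrate that "low resolution" path, here's a minimal C++ sketch (not the OS internals themselves, just the user-visible effect): the sleep call hands the wait over to the OS, which wakes the thread on one of its periodic timer ticks, so the real delay is usually a little longer than requested.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono;

    auto before = steady_clock::now();

    // Ask the OS to wake us after ~50 ms.  The wake-up happens on a timer
    // tick, so the actual delay is typically somewhat longer than 50 ms.
    std::this_thread::sleep_for(milliseconds(50));

    auto after = steady_clock::now();
    std::cout << "Requested 50 ms, actually slept "
              << duration_cast<microseconds>(after - before).count()
              << " us\n";
}
```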
A high resolution timer is typically a hardware register that counts continually (up or down) at a frequency in the tens of kHz or higher (such as 32768 Hz, 36 kHz, 12 MHz, 2.2 GHz - again just examples, it could be almost anything). Typically it is used by "reading the current value": you get whatever the current count is.
This will give high resolution timing values, which is useful for measuring short intervals (e.g. for benchmarking, or figuring out how long it took for the other server to respond to a ping, or some other small amount of time), but you can't really "wait for 600 ms" using a high resolution timer - at least not without reading said timer a gazillion times, because you can't (in the typical case I describe here) say "tell the OS to start this task again in X number of counts".
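As a rough sketch of the "read the current value" approach, std::chrono::steady_clock (which most implementations back with a high resolution hardware counter) can be sampled before and after the work you want to measure; the loop below is just a placeholder workload:

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>

int main() {
    using namespace std::chrono;

    auto start = steady_clock::now();          // read the counter once

    // Placeholder work being timed.
    std::uint64_t sum = 0;
    for (std::uint64_t i = 0; i < 1'000'000; ++i) sum += i;

    auto stop = steady_clock::now();           // read the counter again
    std::cout << "sum=" << sum << ", elapsed: "
              << duration_cast<nanoseconds>(stop - start).count()
              << " ns\n";
}
```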
(However, sometimes the high resolution timer is simply the same hardware as the low resolution timer, just used in different ways - the difference I describe above is still relevant whether it is one or multiple units providing the different types of timing).
Bear in mind also that if you want "precise timing", the OS needs to play along nicely. Most modern OSes "dislike" tasks that just sit there and slurp CPU time (e.g. the "are we there yet?" style of polling the timer), so they will give preference to other tasks with short run times. This means that if you have high precision timing requirements, the OS needs to be helping you, not stopping your task from running just as it gets to the right time-step. Without knowing exactly what you are trying to do, it's hard to say for sure what the right solution is in your case, tho'. A common compromise is sketched below.
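To tie this back to the idea in the question (coarse wait first, fine-grained wait at the end), one common pattern is to sleep on the OS's low resolution timer for most of the interval and then spin on the high resolution clock for the last little bit. A hedged sketch follows; the 1 ms safety margin is an arbitrary choice for illustration, not a recommended value:

```cpp
#include <chrono>
#include <thread>

// Wait until `deadline`: sleep (cheap, coarse) for most of the interval,
// then busy-poll the clock (expensive, fine) for the final stretch.
void wait_until_precise(std::chrono::steady_clock::time_point deadline) {
    using namespace std::chrono;

    // Leave a margin so an overly long sleep doesn't overshoot the deadline.
    const auto spin_margin = milliseconds(1);

    auto now = steady_clock::now();
    if (deadline - now > spin_margin)
        std::this_thread::sleep_for(deadline - now - spin_margin);

    // "Are we there yet?" polling for the remaining ~1 ms.
    while (steady_clock::now() < deadline) {
        // Optionally std::this_thread::yield() here to be a bit friendlier.
    }
}

int main() {
    using namespace std::chrono;
    auto target = steady_clock::now() + milliseconds(600);
    wait_until_precise(target);   // returns very close to the 600 ms mark
}
```

Whether the spin at the end is acceptable depends on how much CPU you can afford to burn and how the OS scheduler treats your task, which is exactly the caveat above.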
Upvotes: 5