Reputation:
I need to figure out the time remaining in seconds, but the only information I can get from the server is minutes.
So I can send a request to the server every 5 seconds and I'll get a response back basically saying "Hey, you have 10 minutes left." After 5 seconds I can send another request and it'll either say "Hey, you still have 10 minutes left" or "Hey, you've got 9 minutes left now."
I thought I could start a timer once it tells me I have 9 minutes left, but that isn't going to be accurate down to the second, because I'm refreshing every 5 seconds and sometimes the server takes a little longer to send me the information than it did the last time I asked.
So my question basically is: is there any way I can accurately figure out how many seconds I have left, given the constraints above?
Edit: How often I send the request can be adjusted.
Upvotes: 3
Views: 122
Reputation: 12435
If you watch a clock tick over from 11:43 to 11:44, don't you know roughly what the seconds count is? If as soon as it ticks over to 44 you start a stopwatch, now you can compute what the seconds are from the stopwatch.
If you can query every x seconds, you can know how many seconds are left with an error of roughly x seconds. You need a high-resolution/high-accuracy time source locally, such as the high performance counter in Windows, available via the Stopwatch class or by p/invoking QueryPerformanceCounter and QueryPerformanceFrequency.
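As a quick illustration (not from the original answer), here's how to read that counter from .NET via the Stopwatch class, which wraps the performance counter when Stopwatch.IsHighResolution is true:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class HighResClockDemo
{
    static void Main()
    {
        long start = Stopwatch.GetTimestamp();  // raw high-resolution counter value
        Thread.Sleep(100);                      // stand-in for doing some work
        long now = Stopwatch.GetTimestamp();

        // Stopwatch.Frequency is the counter's rate in ticks per second.
        double elapsedSeconds = (now - start) / (double)Stopwatch.Frequency;

        Console.WriteLine($"High resolution: {Stopwatch.IsHighResolution}");
        Console.WriteLine($"Elapsed: {elapsedSeconds:F6} s");
    }
}
```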
So you have the following parts: a remote time source with low resolution (the server, which only reports whole minutes), and a local time source with high resolution that you can read whenever you like.
QueryPerformanceCounter qualifies as the local source, and you might be able to use the Stopwatch class to access it without writing any pinvokes (but I wrote my own wrapper instead of using Stopwatch).
The general strategy is as follows: read the local high-resolution counter (call it L1), query the remote time (R1), query the remote time again (R2), and read the local counter again (L2).
You need to do the above in a loop (and when you do, you can optimize it further).
If your remote clock only ever tells you minutes, then almost every time you query it, R1 and R2 are exactly the same value.
However, if you happened to catch it right as it ticked over to the next minute, R1 and R2 should differ by exactly one minute. If that happened, then the local counters L1 and L2 tell you exactly how much time passed during that entire loop.
If L2 - L1 tells you that only 100 milliseconds passed, then you know that time 'R2 and zero seconds' occurred somewhere in that 100 milliseconds.
So when you're performing the above loop, keep going until you catch one of those tick-overs with an acceptably small window L2 - L1, and record L1, L2, and R2 when you do.
Once you have a set of timestamps to work with, you can compute the remote time from just your local high accuracy clock. Local time (L1 + L2) / 2 (call that value LocalMark) roughly equals "R2 and zero seconds" (call that value RemoteMark).
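A rough sketch of that calibration step (QueryRemoteMinutesLeft is a hypothetical stand-in for however you ask the server for its minutes-left value, and the 250 ms pause between attempts is my own addition):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class RemoteClockCalibration
{
    // Loops until the remote minute value ticks over between two back-to-back
    // queries, then returns the local counter value at the estimated instant of
    // that tick ("R2 and zero seconds") together with R2 itself.
    public static (long LocalMarkTicks, int RemoteMarkMinutes) CaptureMark(Func<int> queryRemoteMinutesLeft)
    {
        while (true)
        {
            long l1 = Stopwatch.GetTimestamp();  // local counter before the queries
            int r1 = queryRemoteMinutesLeft();   // first remote reading (minutes left)
            int r2 = queryRemoteMinutesLeft();   // second remote reading
            long l2 = Stopwatch.GetTimestamp();  // local counter after the queries

            // Almost always r1 == r2; the useful case is catching the tick-over.
            if (r1 != r2)
            {
                // The tick happened somewhere between l1 and l2; use the midpoint
                // as the best local estimate of "r2 minutes and zero seconds".
                long localMark = l1 + (l2 - l1) / 2;
                return (localMark, r2);
            }

            Thread.Sleep(250);  // avoid hammering the server between attempts
        }
    }
}
```

The smaller the L2 - L1 window when the tick is caught, the tighter the bound on the mark's error.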
From there, the remote time is computed from the local time as follows:
RemoteTime = (LocalTime - LocalMark) + RemoteMark
LocalTime - LocalMark tells you how much time has passed since RemoteMark was taken, so if you add those two parts together, you should have the remote time.
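For completeness, a minimal sketch of that computation, assuming a mark captured as in the loop above; the countdown variant assumes the server's report ticks over exactly when the remaining time crosses a whole-minute boundary:

```csharp
using System;
using System.Diagnostics;

static class RemoteClock
{
    // The (LocalTime - LocalMark) term: seconds elapsed on the local
    // high-resolution clock since the mark was captured.
    public static double SecondsSinceMark(long localMarkTicks)
        => (Stopwatch.GetTimestamp() - localMarkTicks) / (double)Stopwatch.Frequency;

    // RemoteTime = (LocalTime - LocalMark) + RemoteMark, with RemoteMark
    // expressed as an absolute time.
    public static DateTime RemoteTimeNow(long localMarkTicks, DateTime remoteMark)
        => remoteMark + TimeSpan.FromSeconds(SecondsSinceMark(localMarkTicks));

    // For the original question (a countdown): "R2 minutes and zero seconds"
    // were left at the mark, so subtract the time elapsed since then.
    public static double SecondsLeft(long localMarkTicks, int remoteMarkMinutes)
        => remoteMarkMinutes * 60 - SecondsSinceMark(localMarkTicks);
}
```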
...
The above is my take on what I learned from an MSDN article I found some number of years ago. Unfortunately, Microsoft has retired that article, and it can only be found by the wayback machine:
Implement a Continuously Updating, High-Resolution Time Provider for Windows.
You can also recover the article from the MSDN archives - it was in the March 2004 issue of MSDN magazine, available as a CHM - March 2004 CHM with the article
Upvotes: 0
Reputation: 734
What you can do is relaxation on the offset error: each time you seem to be ahead of or behind the remote clock, let loc_error be your estimate of the error (possibly just +1/-1 if you don't have anything better). Then let time_offset be 0.99*time_offset + 0.01*loc_error, choosing the weights based on how noisy the local error information is and how fast you want to converge toward the right value.
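A minimal sketch of that relaxation, assuming the offset and error are measured in seconds (the names OffsetFilter and gain are illustrative, not from the answer):

```csharp
// Exponentially smooths the observed offset error so that noisy individual
// measurements don't yank the estimate around.
class OffsetFilter
{
    private readonly double gain;  // weight of each new measurement, e.g. 0.1 or 0.01

    public double TimeOffset { get; private set; }  // current best estimate, in seconds

    public OffsetFilter(double gain) => this.gain = gain;

    // locError: how far ahead (+) or behind (-) you currently appear to be;
    // can be as coarse as +1/-1 second if nothing better is available.
    public void Update(double locError)
        => TimeOffset = (1.0 - gain) * TimeOffset + gain * locError;
}
```

With gain = 0.1 the estimate moves 10% of the way toward each new measurement, so noise is damped but convergence takes a few dozen updates.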
Upvotes: 1