Reputation: 4778
I have a server-client application where the general purpose is to have one client connect to multiple servers simultaneously. The client makes a request that each server begin recording data locally at the server, starting at some given DateTime and continuing for some TimeSpan. After making the request, the client disconnects from the servers. Then, some time later, the client reconnects to the servers to retrieve the recorded data. In order for the client to properly parse the recorded data, the time reference between all of the servers must match (to within some reasonable degree of accuracy, say 1 second).
So I need to be able to achieve synchronized time across all the servers as referenced by that client during the recording period.
I can't guarantee that the servers are online (they might be on a local LAN only), therefore I can't have them poll some internet database for UTC time. I can't guarantee that the client will stay connected during the entire time of the recording, therefore I can't have the client continually sending out its time as a reference.
I suppose this only leaves one option: the client must tell each server what it thinks the time is when it sets up the recording and the servers must figure out the delta between that time and what they think the time is. Then the servers must apply that delta to any recorded data.
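To make the idea concrete, here is roughly what I mean (a minimal sketch; the RecordingRequest type and member names are made up, and network latency is ignored):

```csharp
using System;

// Hypothetical request type -- the real application has its own message format.
class RecordingRequest
{
    public DateTime StartTimeUtc { get; set; }   // when recording should begin
    public TimeSpan Duration { get; set; }       // how long to record
    public DateTime ClientUtcNow { get; set; }   // the client's clock at request time
}

class Server
{
    // Offset between the client's clock and this server's clock,
    // captured once when the recording request arrives.
    private TimeSpan _clientOffset;

    public void HandleRequest(RecordingRequest request)
    {
        _clientOffset = request.ClientUtcNow - DateTime.UtcNow;
        // ... schedule the recording as usual ...
    }

    // Translate a locally taken timestamp into the client's time frame
    // before storing it with the recorded data.
    public DateTime ToClientTime(DateTime serverUtcTimestamp)
    {
        return serverUtcTimestamp + _clientOffset;
    }
}
```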
I have concerns over this method. Will it keep accurate time on the servers over long periods of time, say 10 days? Could the delta drift over time if one of the servers is unable to keep good time?
Or, is there an even better way to do this?
Upvotes: 0
Views: 421
Reputation: 11
You are mainly concerned with internal consistency, rather than absolute time, so:
Upvotes: 1
Reputation: 46040
If I understand your task correctly, you need to measure relative time from the start of the request to the end of data collection, i.e. you must know exactly when a certain piece of data was collected. To address this, your servers only need to track the relative time of their activity (on Windows this can be done with the GetTickCount function; similar functions exist on Unix) and record the relative time since the beginning of data collection. On the client you can just add the absolute time at which the client sent the data collection request (and you can store this time on the client as well).
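A minimal sketch of this idea in C# (using Stopwatch as the managed counterpart of GetTickCount; the types and member names are only illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Server side: store only the elapsed time since data collection started.
class Recorder
{
    private readonly Stopwatch _clock = Stopwatch.StartNew(); // monotonic, unaffected by wall-clock changes

    public List<(TimeSpan Elapsed, byte[] Sample)> Samples { get; } = new();

    public void Record(byte[] sample) => Samples.Add((_clock.Elapsed, sample));
}

// Client side: remember the absolute time at which the request was sent,
// and add it to the relative timestamps when the data is retrieved.
class Client
{
    public DateTime RequestSentUtc { get; } = DateTime.UtcNow;

    public DateTime ToAbsolute(TimeSpan elapsedOnServer) => RequestSentUtc + elapsedOnServer;
}
```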
Upvotes: 1
Reputation: 5194
"I can't have them poll some internet database for UTC time..." However, even having access to internet UTC (e.g. an NTP server) only from time to time will allow you to estimate how the local time progresses compared to internet UTC. This can basically be used to predict the offset over some period of time. Typical modern hardware shows drift rates of 5 to 20 microseconds per second. With some implementation effort the drift rate can be determined to an accuracy of about 1 microsecond per second. Consequently, the remaining "inaccuracy" of 1 us/s produces an error of 3.6 ms/hour, or ~90 ms/day.
Your requirements ("to within some reasonable degree of accuracy, say 1 second" and "over long periods of time, say 10 days") can be fulfilled with such a scheme: a deviation of 1 us/s accumulates to roughly 900 ms over 10 days.
How to do it (a very basic description):
- Whenever internet UTC (e.g. an NTP server) is reachable, take a pair of readings: the local clock time and the corresponding UTC.
- From two such pairs taken some time apart, derive the drift rate of the local clock: (change in UTC - change in local time) / change in local time.
- Between syncs, use the drift rate to predict the current offset of the local clock from UTC and apply that predicted offset to the recorded timestamps.
This could alternatively be done on the client side. The client receives not just the recorded data but also a timestamp from the server. This way the client can estimate how much time elapsed on the server side since the last request was handled. It compares the elapsed time on the server side with the elapsed time on the client side, estimates a drift rate, and applies a correction to the recorded data. This would basically work without any internet UTC.
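A rough sketch of the client-side variant (illustrative names; it assumes the server returns its current wall-clock time along with the recorded data):

```csharp
using System;

// Client-side drift estimation: sample the server's wall-clock time at two
// client contacts, derive the drift rate from the difference in elapsed time,
// and map server timestamps recorded in between onto the client's time line.
class DriftEstimator
{
    private readonly DateTime _clientAtFirstContact;
    private readonly DateTime _serverAtFirstContact;
    private double _driftRate; // (serverElapsed - clientElapsed) / clientElapsed

    public DriftEstimator(DateTime serverNow)
    {
        _clientAtFirstContact = DateTime.UtcNow;
        _serverAtFirstContact = serverNow;
    }

    // Call when the client reconnects and receives the server's current time.
    public void OnSecondContact(DateTime serverNow)
    {
        double clientElapsed = (DateTime.UtcNow - _clientAtFirstContact).TotalSeconds;
        double serverElapsed = (serverNow - _serverAtFirstContact).TotalSeconds;
        _driftRate = (serverElapsed - clientElapsed) / clientElapsed;
    }

    // Convert a server timestamp to the client's time line, removing the drift.
    public DateTime Correct(DateTime serverTimestamp)
    {
        double serverElapsed = (serverTimestamp - _serverAtFirstContact).TotalSeconds;
        double correctedElapsed = serverElapsed / (1.0 + _driftRate);
        return _clientAtFirstContact + TimeSpan.FromSeconds(correctedElapsed);
    }
}
```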
Upvotes: 1