Reputation: 923
I have a client-server application I'm trying to optimize. I built a pseudo-client to bang against my server's APIs. I run the client on one box and the server on another. I'm trying to correlate the times of certain events between the two, where the times are recorded in terms of each system's local clock. The client sends a request and records that time. The server receives that request and records that time. The server does its processing, forms/sends a response, and records that time. The client records the time at which it finishes receiving the response.
Ultimately, what I'm trying to do is improve throughput as measured from the client's request-sent time to its response-received time.
Am I missing something by trying to meaningfully correlate the clocks on the two systems? Is this even possible? If so, how is it done? How do you measure/improve upon this throughput?
Currently my client is telling me I'm doing 25 requests-sent-to-responses-received per second (an interval of 0.04 seconds on average) over 19,000+ transactions. But the two timestamps on the server are telling me I'm turning around a transaction, request-received to response-sent, in 0.020 seconds on average (scaled up to capacity, a max of ~50 transactions/sec). That means half the beginning-to-end time is data 'on the line' (to credit Vince Vaughn). If I have to regard the time on the line as fixed and can only optimize the server turnaround, then even assuming I can reduce the server time to 0, my max throughput can be no greater than 50 transactions per second. I'd think this could be reduced to 1/100th of that. Only 50 transactions per second seems crazy slow for a 1G network where a packet only has to travel through one switch and about 50' of cable.
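To make the arithmetic explicit (a sketch of the reasoning, using the figures above):

    // Measured averages, in seconds
    double clientRoundTrip = 0.04;    // client: request-sent to response-received
    double serverTurnaround = 0.02;   // server: request-received to response-sent

    // Whatever isn't server turnaround must be time 'on the line'.
    double onTheLine = clientRoundTrip - serverTurnaround;  // ~0.02 s

    // With line time fixed and server time driven to zero, serialized
    // throughput is capped at:
    double maxThroughput = 1.0 / onTheLine;                 // ~50 transactions/sec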
So how do you correlate the two system times? How do you measure this throughput?
Upvotes: 0
Views: 94
Reputation: 923
Been there and this is what I did.
Have the client record the request-sent time AND send that time to the server. When it's received, have the server calculate the TimeSpan between the time reported by the client and the server's own DateTime.Now. Have the server record its times relative to that TimeSpan, i.e. DateTime.Now.Subtract(tsVariable). You can (marginally*) think of the server as recording times in terms of the client's clock.
You need to Subtract the difference because if the server's clock is a little bit ahead of the client's, subtracting the difference makes the server's DateTime.Now more closely reflect the relative DateTime.Now of the client. In this case the TimeSpan will be a positive (+) value. If the server's clock is a little behind the client's, then the TimeSpan will be negative (-), but you still Subtract the TimeSpan. This is what's called "subtracting a negative," which is the same as adding the TimeSpan, and it catches the server's DateTime.Now up to the relative client's DateTime.Now.
When the response gets back to the client, have the client read its own clock again to record the final response-received time. The server does not need to report its time to the client.
*The downside is that the ticks that tick off between when the client first captures the time it is going to report to the server and when the server receives that report and calculates the TimeSpan are not known by either side. But I came to believe that isn't overly consequential, or at least not in a pragmatic sense.
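Here's a minimal sketch of what I mean (the names are my own choices, not required by anything):

    using System;

    static class ClientRelativeClock
    {
        // Offset = server clock minus the client's reported send time.
        // The request's transit time gets folded into this silently
        // (that's the caveat marked * above).
        public static TimeSpan ComputeOffset(DateTime clientReportedSentTime) =>
            DateTime.Now - clientReportedSentTime;

        // Positive offset (server ahead): subtracting pulls server time back.
        // Negative offset (server behind): subtracting a negative adds,
        // catching the server up to the client.
        public static DateTime Now(TimeSpan offset) => DateTime.Now.Subtract(offset);
    }

Call ComputeOffset once when the request arrives, then use Now(offset) for the server's request-received and response-sent timestamps.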
Another scenario I played with is to have the client first get the server's time. This should be as hyper-express a fetch as possible. Have the client calculate the TimeSpan and use that in its own recording and in reporting to the server ... you might find this level of effort doesn't provide a lot of benefit.
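A sketch of that variant (timeService.GetServerTime() is a hypothetical fast operation, and the midpoint correction assumes the fetch is symmetric):

    // Client side: fetch the server's clock as fast as possible, estimate the offset.
    DateTime before = DateTime.Now;
    DateTime serverNow = timeService.GetServerTime();  // hypothetical 'get time' call
    DateTime after = DateTime.Now;

    // Assume the server read its clock roughly mid-flight.
    TimeSpan roundTrip = after - before;
    DateTime clientMidpoint = before + TimeSpan.FromTicks(roundTrip.Ticks / 2);
    TimeSpan tsOffset = serverNow - clientMidpoint;    // server clock minus client clock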
One HIGH-END option is a third system that records the time on the other two systems for you: it constantly records the time differences between the two systems and its own clock, AND/OR adjusts the clocks on the two systems.
Upvotes: 0
Reputation: 121
That's quite a cool test - your technique sounds like a good solution.
Are you saving the date & time values somewhere? Could the time difference (0.04 & 0.02 secs respectively) be due to how long it takes to record those dates? I.e., if you are saving to a database, for example, the insert/update may take a bit of time to complete due to something like a big table with indexes, etc.
EDIT I tried the below, simulating with a WCF server & client running on the same machine, to rule out that WCF itself could be slow for whatever reason. That appears not to be the case, so I can only recommend trying to find out whether the event logging might be causing the delays or whether there is indeed some weird lag in your network setup.
My server code:
using System;
using System.ServiceModel;

[ServiceContract]
public interface IServiceWCF
{
    [OperationContract]
    DateTime TestConnectionSpeed(DateTime messageSentFromClientTime,
        out DateTime messageReceivedAtServerTime,
        out int millisecondsBetweenClientSentAndServerReceived);
}

public class ServiceWCF : IServiceWCF
{
    public DateTime TestConnectionSpeed(DateTime messageSentFromClientTime,
        out DateTime messageReceivedAtServerTime,
        out int millisecondsBetweenClientSentAndServerReceived)
    {
        // Stamp the arrival time and report how long the request spent in transit.
        messageReceivedAtServerTime = DateTime.Now;
        TimeSpan span = messageReceivedAtServerTime - messageSentFromClientTime;
        millisecondsBetweenClientSentAndServerReceived = (int)span.TotalMilliseconds;

        // Return the server's send time so the client can measure the return trip.
        return DateTime.Now;
    }
}
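For reference, this is roughly how it can be self-hosted on one machine (a sketch; the named-pipe address is my choice, not from my original test):

    using System;
    using System.ServiceModel;

    class ServerProgram
    {
        static void Main()
        {
            // A named-pipe binding keeps everything on the local machine.
            using (var host = new ServiceHost(typeof(ServiceWCF),
                       new Uri("net.pipe://localhost/speedtest")))
            {
                host.AddServiceEndpoint(typeof(IServiceWCF), new NetNamedPipeBinding(), "");
                host.Open();
                Console.WriteLine("Service running. Press Enter to stop.");
                Console.ReadLine();
            }
        }
    }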
My client code:
// Capture the client's send time, call the service, then capture the receipt time.
int millisecondsBetweenClientSentAndServerReceived;
DateTime clientSent = DateTime.Now;
DateTime serverReceived;
DateTime serverSent = wcfService.TestConnectionSpeed(clientSent,
    out serverReceived,
    out millisecondsBetweenClientSentAndServerReceived);
DateTime responseReceived = DateTime.Now;

// The return trip: the server's reported send time against the client's own clock.
TimeSpan span = responseReceived - serverSent;
int millisecondsBetweenServerSentAndClientReceived = (int)span.TotalMilliseconds;

Console.WriteLine("Message sent from client at {0} - server received {1} milliseconds later at {2} - server response sent at {3} - was received at client {4} milliseconds later at {5}",
    clientSent,
    millisecondsBetweenClientSentAndServerReceived,
    serverReceived,
    serverSent,
    millisecondsBetweenServerSentAndClientReceived,
    responseReceived);
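The wcfService proxy above can be created with a ChannelFactory; a sketch matching the hypothetical named-pipe endpoint from the hosting snippet:

    using System.ServiceModel;

    var factory = new ChannelFactory<IServiceWCF>(
        new NetNamedPipeBinding(),
        new EndpointAddress("net.pipe://localhost/speedtest"));
    IServiceWCF wcfService = factory.CreateChannel();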
And the answer is mostly very fast - 1 millisecond - see sample output:
Message sent from client at 3/24/2017 3:56:22 PM - server received 1 milliseconds later at 3/24/2017 3:56:22 PM - server response sent at 3/24/2017 3:56:22 PM - was received at client 1 milliseconds later at 3/24/2017 3:56:22 PM
Upvotes: 1