Reputation: 1120
Customer machines send UDP requests to our server. The server processes each request and sends a response. The logic of the transactions requires the client to wait for a response before sending a new request.
Even if all processing by client and server machines is instantaneous, it appears our customers still need about 30ms on average just to send/receive a round trip transaction over the Internet. (That's traveling about 5,580 miles at light speed.)
Does that mean a given customer on average can't do more than about 120,000 synchronous transactions per hour?
1 transaction = 0.030 seconds minimum
120,000 transactions = 1 hour
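The arithmetic in the question can be sanity-checked in a couple of lines. This is just a sketch of the ceiling implied by a given round-trip time; the function name is made up for illustration:

```python
def max_sync_transactions_per_hour(rtt_seconds):
    """With strictly serialized request/response transactions,
    throughput is capped at one transaction per round trip."""
    return 3600 / rtt_seconds

# 30 ms RTT -> at most ~120,000 transactions per hour per client
print(round(max_sync_transactions_per_hour(0.030)))
```

So yes: with a 30 ms round trip and one outstanding request at a time, 120,000/hour is the hard ceiling for a single client, before any processing time is added.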
Upvotes: 1
Views: 93
Reputation: 150188
Impact of Latency
Since you must serialize your requests, latency will limit your transaction rate.
However, the speed of light calculation is the theoretical best-case transit time. In real life, there are routers along the way that add latency.
Be sure to measure the actual ping time at various points in the day, over several days, to get real latency numbers.
The client and server code will not actually process in zero time, and the processing time might well be at least as much as the latency (depending on what you are doing), so it may not be realistic to assume the processing time approaches zero.
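One way to measure the real round trip, including any server processing, is to time the request/response exchange itself rather than relying on ping (ICMP can be deprioritized or blocked). A minimal sketch, using a local UDP echo server to stand in for the real server; against production you would point the client socket at the server's actual address:

```python
import socket
import statistics
import threading
import time

def echo_server(sock):
    # Echo each datagram straight back to its sender until told to stop.
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"stop":
            break
        sock.sendto(data, addr)

# Stand-in server on loopback; the OS picks a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_addr = server.getsockname()
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)

samples = []
for _ in range(20):
    t0 = time.perf_counter()
    client.sendto(b"ping", server_addr)
    client.recvfrom(1024)              # block until the echo arrives
    samples.append(time.perf_counter() - t0)

client.sendto(b"stop", server_addr)    # shut the echo loop down

rtt = statistics.median(samples)
print(f"median RTT: {rtt * 1000:.3f} ms")
print(f"synchronous ceiling: {3600 / rtt:.0f} transactions/hour")
```

Taking the median over a batch of samples, repeated at different times of day, gives you a realistic number to plug in instead of the speed-of-light figure.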
Overcoming Latency
These days, there are a number of fairly inexpensive ways to put your servers (or at least a layer of your architecture) closer to your customers. For example, you could use a service such as AWS to place processing resources in geographical proximity to your customers. You can then either give West Coast customers a different URL than East Coast customers, or use geographic load balancing so that everyone uses the same URL and your load balancing service routes each request to the nearest server worldwide. I have successfully used UltraDNS for that purpose in the past.
Upvotes: 1