Reputation: 3280
Because of the geographic distance between server and client, network latency can vary a lot. So I want to get the "pure" request processing time of the service, without the network latency.
I want to take the TCP connect time as the network latency. As far as I understand, this time depends mostly on the network.
The main idea is to compute:
I divide the TCP connect time by 2 because there are in fact 2 request-responses (the 3-way handshake).
I have two questions:
PS: As a tool I use Erlang's gen_tcp. I can show the code.
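A stripped-down sketch of the idea (not my actual code; the module name, Host, Port and Request are placeholders) looks roughly like this, timing the connect and the first response packet separately:

    %% Hypothetical sketch: time the TCP connect and the first response
    %% packet separately; Host, Port and Request are placeholders.
    -module(probe).
    -export([measure/3]).

    measure(Host, Port, Request) ->
        T0 = erlang:monotonic_time(microsecond),
        {ok, Sock} = gen_tcp:connect(Host, Port, [binary, {active, false}]),
        T1 = erlang:monotonic_time(microsecond),
        ok = gen_tcp:send(Sock, Request),
        %% With {active, false}, recv/3 with length 0 returns as soon as
        %% any data is available, so T2 - T1 is the time to the first packet.
        {ok, _FirstPacket} = gen_tcp:recv(Sock, 0, 5000),
        T2 = erlang:monotonic_time(microsecond),
        ok = gen_tcp:close(Sock),
        %% The pure service time estimate is then derived from these two numbers.
        #{connect_us => T1 - T0, first_packet_us => T2 - T1}.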
Upvotes: 7
Views: 2243
Reputation: 11
I worked on a similar question in the past, for a network performance monitoring vendor. IMHO, there are a number of questions to be asked before proceeding:
So if you use only the first (and reliable) measurement, you may miss some network delay variation (especially in apps using long-lasting TCP sessions).
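One way to at least see that variation is to re-sample the connect time periodically instead of measuring it once; a hypothetical sketch (the module name, sample count and interval are my own choices, and fresh connections only show how the delay drifts over time, not within one long-lived session):

    %% Hypothetical sketch: take N TCP connect-time samples, IntervalMs
    %% milliseconds apart, so network delay variation over time becomes visible.
    -module(latency_samples).
    -export([collect/4]).

    collect(Host, Port, N, IntervalMs) ->
        [begin
             T0 = erlang:monotonic_time(microsecond),
             {ok, S} = gen_tcp:connect(Host, Port, [binary, {active, false}]),
             T1 = erlang:monotonic_time(microsecond),
             ok = gen_tcp:close(S),
             timer:sleep(IntervalMs),
             T1 - T0                    %% one connect-time sample, in microseconds
         end || _ <- lists:seq(1, N)].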
Upvotes: 1
Reputation: 18966
If anything, I guess the "pure" service time = TCP first packet receive time - TCP connect time. You have written it the other way round.
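For example, with made-up numbers: if the connect takes 80 ms (roughly one round trip) and the first response packet arrives 230 ms after the request is sent, the estimated pure service time is 230 - 80 = 150 ms.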
A possible answer to your first question is: you should ideally compute at least some sort of average by considering the pure service time of many packets rather than just the first packet.
Ideally it can also include worst-case, average-case, and best-case service times.
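Given a list of such pure service time samples (my own sketch, units assumed to be microseconds), those three figures are straightforward to compute:

    %% Hypothetical helper: reduce pure service time samples to
    %% best-case, average-case and worst-case values.
    service_stats(Samples) when Samples =/= [] ->
        #{best_us  => lists:min(Samples),
          avg_us   => lists:sum(Samples) div length(Samples),
          worst_us => lists:max(Samples)}.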
To answer the second question, we would need to know why you need the pure service time only. I mean, since it is a network application, network latencies (connection time, etc.) should also be included in the "response time", not just the pure service time. That is my view based on the given information.
Upvotes: 4