Reputation: 1291
On my Gatling reports, I noticed that the "Response Time Percentiles" and "Latency Percentiles over Time" charts look almost identical. How exactly do they differ?
I saw this post, which makes me even more unsure:
Latency Percentiles over Time (OK) – the same as Response Time Percentiles over Time (OK), but showing the time the server needs to process the request, although it is incorrectly called latency. By definition, Latency + Process Time = Response Time, so this chart is supposed to show the time needed for a request to reach the server. Looking at real-life graphs, I think it shows not the latency but the actual process time. You can get an idea of the real latency by taking the same second from Response Time Percentiles over Time (OK) and subtracting the value from this chart for that second.
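To make sure I understand the subtraction the quoted post proposes, here is a minimal sketch. All per-second values are hypothetical numbers I made up for illustration, not real Gatling output, and subtracting percentiles is only a rough estimate (percentiles are not exactly additive):

```python
# Hypothetical per-second 95th-percentile readings off the two charts, in ms.
response_p95 = {10: 300, 11: 450, 12: 280}  # Response Time Percentiles over Time
chart_p95 = {10: 260, 11: 400, 12: 250}     # "Latency" chart (per the quote: really process time)

# Per the quote: Response Time = Latency + Process Time, so an estimate
# of the latency is the per-second difference of the two charts.
latency_estimate = {t: response_p95[t] - chart_p95[t] for t in response_p95}
print(latency_estimate)  # {10: 40, 11: 50, 12: 30}
```

Is this roughly the calculation the post has in mind?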
Thanks in advance for your help.
Upvotes: 0
Views: 1300
Reputation: 1
Latency basically tells you how long it takes to receive the first packet of the response for each request over the duration of your load test. If you look at this chart in the Gatling documentation, the first spike is just before 21:30:20 on the x axis: it tells you that 100% of the requests at that moment took longer than 1000 milliseconds for the first packet to travel from source to destination, but that number fell significantly after 21:30:20.
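The distinction can be sketched with three timestamps per request: when it was sent, when the first response byte arrived, and when the last byte arrived. Latency is time-to-first-byte; response time covers the whole exchange. This is an illustration with made-up timestamps, not Gatling's actual internals:

```python
from statistics import quantiles

# (sent, first_byte, last_byte) -- hypothetical timestamps in ms
requests = [
    (0, 40, 120),
    (0, 55, 90),
    (0, 35, 200),
    (0, 80, 260),
]

# Latency: time until the first packet of the response arrives.
latencies = [first - sent for sent, first, _ in requests]
# Response time: time until the full response has been received.
response_times = [last - sent for sent, _, last in requests]
# Process time is the remainder, per Latency + Process Time = Response Time.
process_times = [r - l for r, l in zip(response_times, latencies)]

def p95(values):
    """95th percentile via statistics.quantiles (99 cut points)."""
    return quantiles(values, n=100)[94]

print("latency p95 (ms):", p95(latencies))
print("response time p95 (ms):", p95(response_times))
```

The percentile charts in the report are just these per-metric percentiles bucketed by second, which is why the two charts look similar whenever process time dominates or is roughly constant.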
Upvotes: 0