Reputation: 53
I have a question and I appreciate any help.
To make a long story short: a few days ago, in a meeting with a QA engineer, we discussed the difference between the number of requests and the number of users in a test case.
I think it differs depending on the case. When I'm just exercising a service or something similar it doesn't matter, because 1 user performing 10 requests in one minute is the same as 10 users each performing 1 request over the same period.
But, for example, if I need to validate the service's throughput by sending 1000 requests in 1 hour, does it matter if I send those 1000 requests but not from 1000 users within that hour?
I'd like to hear other perspectives and opinions on this.
Thanks in advance for any help
Sorry in advance for my English.
Upvotes: 0
Views: 742
Reputation: 168082
To make a long story short: it depends on the implementation of the application under test.
Personally I would stick to the following approach: 1 virtual user = 1 real user with all of its "stuff" (for web applications that means browser-derived headers, JavaScript-driven calls, handling of images, scripts and styles, caching, think times, etc.)
The idea is to simulate real-life usage of the system under test as closely as possible, where 1 user == 1 producer (or consumer), and then measure the number of requests per second generated by X concurrent users alongside the other performance metrics. See the "What is the Relationship Between Users and Hits Per Second?" article for more details.
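To illustrate the "1 virtual user = 1 real user" idea, here is a minimal, tool-agnostic sketch in Python. The endpoint, user count, iteration count and think time are all hypothetical values for illustration; each virtual user keeps its own session, sends a browser-like header and pauses between requests, and the aggregate requests per second is measured at the end.

```python
# Sketch only: each virtual user has its own session (cookies), browser-like
# headers and think time; BASE_URL, USERS, ITERATIONS are hypothetical.
import time
import threading
import requests

BASE_URL = "https://example.com/api/resource"  # hypothetical endpoint
USERS = 10          # concurrent virtual users
ITERATIONS = 5      # requests per user
THINK_TIME_S = 2.0  # pause between requests, like a real user

request_count = 0
count_lock = threading.Lock()

def virtual_user(user_id: int) -> None:
    """One virtual user == one producer: own session, own pacing."""
    global request_count
    session = requests.Session()
    session.headers.update({"User-Agent": "Mozilla/5.0 (load test)"})
    for _ in range(ITERATIONS):
        session.get(BASE_URL, timeout=10)
        with count_lock:
            request_count += 1
        time.sleep(THINK_TIME_S)  # think time keeps the per-user rate realistic

start = time.time()
threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
print(f"{request_count} requests from {USERS} users in {elapsed:.1f}s "
      f"-> {request_count / elapsed:.2f} req/s")
```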
Another important factor is the session: it might be the case that the system under test considers a second, third, etc. login a continuation of the first one, or simply doesn't allow multiple concurrent logins.
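A short sketch of that session concern, assuming a hypothetical /login endpoint that sets a session cookie: each virtual user logs in with its own credentials and keeps its own cookie jar, so the server sees N distinct sessions rather than N repeated logins of one account.

```python
# Assumption: the system exposes a /login endpoint that sets a session cookie.
import requests

LOGIN_URL = "https://example.com/login"  # hypothetical
DATA_URL = "https://example.com/data"    # hypothetical

def logged_in_user(username: str, password: str) -> requests.Session:
    session = requests.Session()  # separate cookie jar per virtual user
    session.post(LOGIN_URL, data={"user": username, "pass": password}, timeout=10)
    return session

# Distinct credentials per user; a shared login might be rejected or merged
# into one session by the system under test.
users = [logged_in_user(f"user{i}", "secret") for i in range(3)]
for s in users:
    s.get(DATA_URL, timeout=10)
```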
If you're testing a "dumb" service which doesn't differentiate incoming requests and doesn't cache responses, it doesn't really matter whether you use 1 thread or 1000 threads; the only difference will be in the "sent bytes" metric if the credentials have different lengths. (In any case, sending 1 request every 3.6 seconds doesn't look like a realistic load test to me.)
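For completeness, a rough sketch of the two ways to produce the same 1000 requests against a hypothetical stateless endpoint: one worker sending them back to back versus a pool of workers sending them concurrently. The total number of requests matches; the concurrency, and therefore the load profile the server actually experiences, does not.

```python
# Same total requests, different concurrency; URL and worker counts are
# hypothetical values for illustration.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://example.com/service"  # hypothetical stateless service
TOTAL_REQUESTS = 1000

def fire(_: int) -> int:
    return requests.get(URL, timeout=10).status_code

def run(workers: int) -> None:
    start = time.time()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(fire, range(TOTAL_REQUESTS)))
    elapsed = time.time() - start
    print(f"{workers} worker(s): {TOTAL_REQUESTS} requests in {elapsed:.1f}s "
          f"({TOTAL_REQUESTS / elapsed:.2f} req/s)")

run(1)    # single producer
run(100)  # many concurrent producers
```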
Upvotes: 1