Reputation: 16313
I would like to test an upload service with hundreds, if not thousands, of slow HTTPS connections simultaneously. I would like to have lots of, say, 3G-quality connections, each throttled with low bandwidth and high latency, each sending a few megabytes of data up to the server, resulting in lots of concurrent, long-lived requests being handled by the server.
There are many load generation tools that can generate thousands of simultaneous requests. (I'm currently using Locust, mostly so that I can take advantage of my existing client library written in Python.) Such tools typically run each concurrent request as fast as possible over the shared network link.
There are various ways to adjust the apparent bandwidth and latency of TCP connections, such as Linux's TC and handy wrappers like Comcast.
As far as I can tell, TC and the like shape the shared link, but they cannot throttle individual requests. If you only want to throttle a single request, TC works well. In theory, though, with many clients sharing the same throttled network link, the requests could end up running serially, one after another at the constrained bandwidth, rather than lots of requests executing concurrently a few packets at a time. The former would leave far fewer requests being actively handled on the server at any given moment.
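For a single connection, that kind of pacing can even be done in the client itself. Here is a rough sketch of the per-request throttling I mean, using a requests-style client in Python (the chunk size, rate, and URL are arbitrary, illustrative values):

import time

CHUNK_SIZE = 16 * 1024        # 16 KiB per chunk (illustrative)
RATE_BYTES_PER_S = 48 * 1024  # roughly a 384 kbps uplink (illustrative)

def throttled_body(payload, chunk_size=CHUNK_SIZE, rate=RATE_BYTES_PER_S):
    """Yield the payload in chunks, sleeping between chunks so this one
    request's effective upload rate stays near `rate` bytes per second."""
    delay = chunk_size / rate
    for offset in range(0, len(payload), chunk_size):
        yield payload[offset:offset + chunk_size]
        time.sleep(delay)

# A requests-style client sends a generator body using chunked transfer
# encoding, so each request is paced individually:
# import requests
# requests.post("https://upload.example.com/upload",
#               data=throttled_body(b"x" * 3 * 1024 * 1024))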
I suspect that the tool I want has to actively manage each individual client's sending and receiving to throttle them fairly. Is there such a tool?
Upvotes: 1
Views: 848
Reputation: 5692
Yes, these are network simulators. A very primitive one comes in the form of WANem. It is not going to cover your testing needs. You will need something akin to Shunra Storm, a hardware device which can manage individual connections and impairment, with models derived from Ookla (think speedtest.net) data for 3G/4G/5G connections in the wild. Well, perhaps I should say "could manage," as this product has been absent since the HP acquisition of Shunra.
There are some other market competitors on the network front from companies such as Ixia, Agilent, PacketStorm, Spirent and the like. None of them are inexpensive, but I see your need. Slow, and particularly dirty, connections like cell phones have a disproportionate impact on the stack and can result in the server running out of resources with far fewer mobile connections than desktop ones.
On a side note, be sure you are including a representative model for think time in your test code. If you collapse the client-server conversation to no (or extremely limited) think time and impair only the network, bad things can happen. This will play particular havoc with both the predictability and repeatability of your tests. You may also wind up chasing dozens of engineering ghosts related to load in your code that will not occur in production, because of the natural delays and the release of resources which should occur during the idle windows between client requests.
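In Locust terms (the tool mentioned in the question), think time is just a wait between tasks. A minimal sketch, where the 5-to-15-second range and the /upload path are assumptions rather than recommendations:

from locust import HttpUser, task, between

class UploadUser(HttpUser):
    # Simulated think time between client requests (assumed range)
    wait_time = between(5, 15)

    @task
    def upload(self):
        # Placeholder upload; the real payload and endpoint come from your own test
        self.client.post("/upload", data=b"x" * 1024 * 1024)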
Upvotes: 1
Reputation: 168082
You can take a look at Apache JMeter; it can "throttle" connections to a configurable throughput via the following properties:
httpclient.socket.http.cps=0
httpclient.socket.https.cps=0
The properties can be defined either in the user.properties file or passed to JMeter via the -J command-line argument.
cps stands for characters per second, so you can "slow down" JMeter threads (virtual users) to the given throughput rate. The formula for calculating cps is:
cps = (target bandwidth in kbps * 1024) / 8
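For instance, a quick helper for turning a target bandwidth into a cps value; the 384 kbps figure below is just an assumed 3G-class uplink, not a value from JMeter itself:

# Convert a target bandwidth in kbps to JMeter's cps value using the
# formula above: cps = (bandwidth_kbps * 1024) / 8
def bandwidth_to_cps(bandwidth_kbps):
    return int(bandwidth_kbps * 1024 / 8)

print(bandwidth_to_cps(384))  # 49152 -> httpclient.socket.https.cps=49152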
Check out How to Simulate Different Network Speeds in Your JMeter Load Test for more information.
Upvotes: 1