Daniel T.

Reputation: 38400

How do I use cURL to perform multiple simultaneous requests?

I'd like to use cURL to test flood handling on my server. Right now I'm using this on a Windows command line:

curl www.example.com

which will GET the page once. I'd like to now do the same thing, except instead of one request, I want to generate at least 10 requests at once. How would I do this?

Upvotes: 57

Views: 98652

Answers (7)

Desolator

Reputation: 22759

As of version 7.66.0 you can do the following using -Z, --parallel:

curl -Z https://example.com -o file1 https://example.com -o file2
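Since the question is about hitting the same page several times at once, -Z can be combined with curl's URL globbing so that a single command expands into 10 parallel requests. A minimal sketch, assuming a curl new enough to support --parallel and using www.example.com as a placeholder (the #1 in the output name is replaced with the current glob value):

curl -Z "https://www.example.com/?run=[1-10]" -o "out_#1.txt"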

Upvotes: 1

Rakesh

Reputation: 43

for ((i=1;i<=10;i++)); do curl  -I -k "https://www.example.com"; done

This is the script that works for me. Change the value of 10 if you need to send more requests.
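Note that the loop above fires the requests one after another rather than simultaneously. A minimal concurrent variant, assuming a bash shell: each curl is sent to the background with &, and wait blocks until all of them have finished.

for ((i=1;i<=10;i++)); do curl -I -k "https://www.example.com" & done; wait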

Upvotes: 0

Elijah Lynn

Reputation: 13468

Another method is to use GNU Parallel with Curl.

Here is a simplified example that does 100 curl requests total, 10 at a time (concurrently):

seq 100 | parallel --max-args 0 --jobs 10 "curl https://www.example.com"

seq 100 generates a list of numbers that we pipe into parallel:

1
2
3
4
5
6
7
8
9
10
... and so on

Then we use the --max-args 0 option, which means it will execute one job per argument. Don't change this number. Its short alias is -n.

Docs say:

-n 0 means read one argument, but insert 0 arguments on the command line.

Then we use the --jobs 10 option, which runs up to 10 jobs in parallel/concurrently. Its aliases are -j, --procs, and -P.

Docs say:

Number of jobslots on each machine. Run up to N jobs in parallel. 0 means as many as possible. Default is 100% which will run one job per CPU core on each machine.

Below is a more functional example that prints the return code, hides the output, and, depending on whether the command succeeded (&&) or failed (||), prints a SUCCESS or FAIL message along with it, which I find useful for debugging:

seq 100 | parallel --max-args 0 --jobs 10 "curl -w '%{http_code}\n' https://www.example.com --output /dev/null --location --silent && printf SUCCESS\n\n || printf FAIL\n\n"

[Image: example output of using GNU Parallel with curl]

Upvotes: 5

Sergey Geron

Reputation: 10172

Starting from 7.68.0, curl can fetch several URLs in parallel. This example will fetch URLs from the urls.txt file with 3 parallel connections:

curl --parallel --parallel-immediate --parallel-max 3 --config urls.txt

urls.txt:

url = "example1.com"
url = "example2.com"
url = "example3.com"
url = "example4.com"
url = "example5.com"

Upvotes: 37

Jona

Reputation: 1288

I had a similar case and I ended up writing a Python script:

import threading
import requests

def thread_function(url):
    # each thread issues a single GET request
    response = requests.get(url)

# one thread per URL; both run without waiting for the other
thread1 = threading.Thread(target=thread_function, args=('http://foo',))
thread2 = threading.Thread(target=thread_function, args=('http://bar',))

thread1.start()
thread2.start()

# wait for both requests to complete
thread1.join()
thread2.join()
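To get the 10 simultaneous requests asked about in the question, the same pattern generalizes with a loop; a minimal sketch, assuming the requests library is installed and using https://www.example.com as a stand-in for the server under test:

import threading
import requests

URL = 'https://www.example.com'  # stand-in for the server under test

def hit(url):
    # one GET request per thread
    requests.get(url)

# spawn 10 threads, start them all, then wait for every request to finish
threads = [threading.Thread(target=hit, args=(URL,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()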

Upvotes: 1

Juan Rada

Reputation: 3766

Curl may not do it itself but bash can.

curl -o 1.txt -X GET https://foo & curl -o 2.txt -X GET https://foo

Upvotes: 7

sgmorrison

Reputation: 986

While curl is a very useful and flexible tool, it isn't intended for this type of use. There are other tools available that will let you make multiple concurrent requests to the same URL.

ab is a very simple yet effective tool of this type, which works for any web server (despite its introduction focusing on the Apache server).
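For example, a typical ab invocation for the scenario in the question sends 100 requests with a concurrency of 10 (the exact numbers here are only an illustration):

ab -n 100 -c 10 https://www.example.com/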

Grinder is a more sophisticated tool that lets you specify many different URLs to use in a load test. This lets you mix requests for cheap and expensive pages, which may more closely resemble the standard load on your website.

Upvotes: 43
