Reputation: 1654
Right now I am trying to execute asynchronous requests that have no dependency on each other, similar to how FTP can upload / download more than one file at once.
I am using the following code:
rec = requests.get("https://url", stream=True)
with
rec.raw.read()
to read the responses.
But I want to run this same piece of code much faster, without blocking while I wait for each server response, which takes about 2 seconds per request.
Upvotes: 0
Views: 29
Reputation: 9427
The easiest way to do something like that is to use threads.
Here is a rough example of one way you might do this.
import requests
from multiprocessing.dummy import Pool  # a thread pool that exposes the multiprocessing API

pool = Pool(4)  # the number represents how many jobs you want to run in parallel

def get_url(url):
    rec = requests.get(url, stream=True)
    return rec.raw.read()

for result in pool.map(get_url, ["http://url/1", "http://url/2"]):
    do_things(result)
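If you are on Python 3, the standard library's concurrent.futures gives you the same thread-pool pattern. Here is a minimal sketch, assuming the same placeholder URLs and the do_things callback from the example above:

import requests
from concurrent.futures import ThreadPoolExecutor

def get_url(url):
    rec = requests.get(url, stream=True)
    return rec.raw.read()

# the executor runs up to max_workers requests concurrently;
# map yields results in the order the URLs were submitted
with ThreadPoolExecutor(max_workers=4) as executor:
    for result in executor.map(get_url, ["http://url/1", "http://url/2"]):
        do_things(result)  # placeholder for whatever processing you need

Either way, the requests still each take ~2 seconds server-side, but running 4 at a time means you wait on them in parallel instead of one after another.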
Upvotes: 2