DeltaWeb

Reputation: 381

Increase download speed of requests

I'm writing a script that downloads some videos from Dropbox. My download speed is usually around 150 KB/s when using a regular downloader in Firefox or IDM, but with this Python script things get far too slow: less than 10 KB/s.

Here's the code I'm using:

import requests

def download(url, i):
    local_filename = "video_" + str(i) + ".mp4"
    # NOTE the stream=True parameter
    r = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        n = 0
        for chunk in r.iter_content(chunk_size=1000000):
            if chunk:  # filter out keep-alive new chunks
                n = n + 1
                f.write(chunk)
                print("downloading " + str(n))
                f.flush()
    return local_filename

Is there any way I could speed up the download?

Upvotes: 2

Views: 5281

Answers (1)

reticentroot

Reputation: 3682

I believe that a chunk_size of 1000000 (roughly 1 MB; the value is in bytes) is too large to load at one time. You can't do anything with chunk while all of that data is buffering into memory. I would reduce the chunk size to something like 512 bytes or 1 KB. And if the file itself is smaller than the chunk size, you're not really loading chunks at all, but the entire file in one go. That said, if you search the net a bit you'll see that the ideal chunk size is still a hotly debated topic. Example code:

from contextlib import closing
import requests

def download(url, i):
    local_filename = "video_" + str(i) + ".mp4"
    with open(local_filename, 'wb') as f, closing(requests.get(url, stream=True)) as res:
        for n, chunk in enumerate(res.iter_content(chunk_size=512), start=1):
            f.write(chunk)
            print("downloading " + str(n))
    return local_filename

The advanced-usage docs suggest wrapping the request in a closing context when streaming, which ensures the connection is released even if the download is interrupted. The if chunk check isn't needed to filter out keep-alives here, nor do you have to call flush(); just write each chunk to the file stream directly.
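To see what closing buys you without hitting the network, here's a small offline sketch: FakeResponse is a hypothetical stand-in for a streaming requests.Response (same iter_content/close interface), showing that the wrapper guarantees close() runs once the with block exits:

```python
from contextlib import closing

class FakeResponse:
    """Hypothetical stand-in for a streaming requests.Response."""
    def __init__(self, chunks):
        self._chunks = chunks
        self.closed = False

    def iter_content(self, chunk_size=512):
        # Yield the pre-baked chunks, mimicking a streamed body.
        for c in self._chunks:
            yield c

    def close(self):
        self.closed = True

res = FakeResponse([b"a" * 512, b"b" * 100])
data = b""
with closing(res) as r:
    for chunk in r.iter_content(chunk_size=512):
        data += chunk
# After the with block, closing() has called res.close() for us.
```

With a real requests.Response the pattern is identical; closing just turns the object's close() method into context-manager cleanup.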

Upvotes: 2
