Remixt

Reputation: 597

Why does my program stop downloading very large files before finishing the full download?

I'm using the method below to download files from HTTP links. The files are generally larger than 20 GB, and the program seems to run correctly, but then it moves on to the next function before the download has actually finished.

import urllib2
import os
def saveFile(url):
    print url
    file_name = url.split('/')[-1]
    #check if the file was already downloaded previously to save time
    if os.path.isfile(file_name):
        print("File already exists, skipping download...")
        return file_name
    u = urllib2.urlopen(url)
    f = open(file_name, 'wb')
    meta = u.info()
    file_size = int(meta.getheaders("Content-Length")[0])
    print "Downloading: %s Bytes: %s" % (file_name, file_size)

    file_size_dl = 0
    block_sz = 8192
    while True:
        buffer = u.read(block_sz)
        if not buffer:
            break

        file_size_dl += len(buffer)
        f.write(buffer)
        status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
        status = status + chr(8)*(len(status)+1)
        print status,
    f.close()
    return file_name
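
One thing I've noticed about the loop: u.read() returns an empty string both at a normal end-of-file and when the server drops the connection, so a truncated transfer exits the loop exactly like a successful one. I could add a check like this after f.close() (a rough sketch using a hypothetical verify_download helper, and assuming the server reports an accurate Content-Length):

import os

def verify_download(file_name, expected_size):
    # Compare the size on disk with the Content-Length reported by
    # the server; a shortfall means the stream ended early.
    actual_size = os.path.getsize(file_name)
    if actual_size < expected_size:
        print "Incomplete download: got %d of %d bytes" % (actual_size, expected_size)
        return False
    return True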

Upvotes: 0

Views: 140

Answers (1)

jihan1008

Reputation: 370

Did you check the Process Manager? The memory may have been used up beyond its limit, causing an out-of-memory error.
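
You can check that from inside the script with the standard resource module (a rough sketch; resource is Unix-only, and ru_maxrss is reported in kilobytes on Linux but in bytes on OS X):

import resource

def print_peak_memory():
    # Peak resident set size (maximum physical memory) the current
    # process has used so far: kilobytes on Linux, bytes on OS X.
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print "Peak memory usage: %d" % peak

With 8192-byte reads the figure should stay roughly flat; if it keeps growing during the download, something really is buffering the whole file in memory.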

Upvotes: 1
