Marco

Reputation: 95

file upload using requests library in python

I have the following script that allows me to upload files to usersfiles. It does not work for large files (e.g. 1 GB). What change could I make?

import requests
import random
import re

filehandle = open("menu.avi", "rb")

resp = requests.get("https://usersfiles.com/")
sess_id = re.search('sess_id.*=?"(.*)?"', str(resp.text)).group(1)
srv_tmp_url = re.search('srv_tmp_url.*=?"(.*)?"', str(resp.text)).group(1)
upload_type = re.search('upload_type.*=?"(.*)?"', str(resp.text)).group(1)

UID = ''
for i in range(0, 12):
    UID = UID + '' + str(random.randint(0, 10))

url2 = "https://up11.usersfiles.com/cgi-bin/upload.cgi?upload_id=" + UID + "&js_on=1&utype=reg&upload_type=" + upload_type

r = requests.post(url2, data={"upload_type": upload_type, "sess_id": sess_id,
                              "srv_tmp_url": srv_tmp_url}, files={"file_0": filehandle})

link_usersfiles = re.search('name=.fn.>(.*?)<', str(r.text)).group(1)

This script gives me the error:

body.write(data)

MemoryError

Upvotes: 0

Views: 291

Answers (1)

Damien Ayers

Reputation: 1639

By default, when uploading files, requests reads the entire file into memory, so it is liable to run out of memory when uploading large files. The easiest way around this is to install requests-toolbelt, which can stream file uploads.

For your example, you could use something like this:

import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

# ... code for preparing for upload ...

m = MultipartEncoder(
    fields={'upload_type': upload_type, 'sess_id': sess_id,
            'file_0': ('filename', filehandle, 'text/plain')}
    )

r = requests.post(url2, data=m,
                  headers={'Content-Type': m.content_type})

For further information see https://toolbelt.readthedocs.org/en/latest/uploading-data.html
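Putting that together with the rest of your script, a rough sketch might look like the following (assuming sess_id, srv_tmp_url, upload_type and url2 are built exactly as in your question; the content type here is just a generic placeholder):

import re
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

# sess_id, srv_tmp_url, upload_type and url2 are assumed to be prepared
# exactly as in the question above.
with open("menu.avi", "rb") as filehandle:
    m = MultipartEncoder(
        fields={"upload_type": upload_type,
                "sess_id": sess_id,
                "srv_tmp_url": srv_tmp_url,
                # (filename, file object, content type): the file is streamed
                # from disk instead of being read into memory all at once
                "file_0": ("menu.avi", filehandle, "application/octet-stream")}
    )
    r = requests.post(url2, data=m, headers={"Content-Type": m.content_type})

link_usersfiles = re.search('name=.fn.>(.*?)<', str(r.text)).group(1)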

Upvotes: 1
