Reputation: 2587
I have two Python scripts. One uses the urllib2 library and one uses the Requests library.
I have found Requests easier to implement, but I can't find an equivalent for urllib2's read()
function. For example:
...
response = url.urlopen(req)
print response.geturl()
print response.getcode()
data = response.read()
print data
Once I have built up my POST URL, data = response.read()
gives me the content. I am trying to connect to a vCloud Director API instance, and the response shows the endpoints that I have access to. However, if I use the Requests library as follows.....
....
def post_call(username, org, password, key, secret):
    endpoint = '<URL ENDPOINT>'
    post_url = endpoint + 'sessions'
    get_url = endpoint + 'org'
    headers = {'Accept': 'application/*+xml;version=5.1',
               'Authorization': 'Basic ' + base64.b64encode(username + "@" + org + ":" + password),
               'x-id-sec': base64.b64encode(key + ":" + secret)}
    print headers
    post_call = requests.post(post_url, data=None, headers=headers)
    print post_call, "POST call"
    print post_call.text, "TEXT"
    print post_call.content, "CONTENT"
    print post_call.status_code, "STATUS CODE"
....
....the print post_call.text
and print post_call.content
statements print nothing, even though the status code equals 200 in the Requests POST call.
Why isn't my response from Requests returning any text or content?
Upvotes: 191
Views: 804794
Reputation: 3790
If the response is in JSON, you could do something like this (Python 3):
import json
import requests as reqs
# Make the HTTP request.
response = reqs.get('https://demo.ckan.org/api/3/action/group_list')
# Use the json module to load CKAN's response into a dictionary.
response_dict = json.loads(response.text)
for i in response_dict:
    print("key: ", i, "val: ", response_dict[i])
To see everything in the response, you can use .__dict__:
print(response.__dict__)
Edit in May 2024 to add a suggestion on how to handle objects in the response dict that are not JSON serializable:
import json
...
print(json.dumps(response_dict, indent=4, sort_keys=True, default=lambda o: '<not serializable>'))
Upvotes: 73
Reputation: 654
To read any particular JSON field, first parse the response with response.json(), then use .get():
response.json().get('id') or response.json().get('expected fields to be read')
Upvotes: 0
Reputation: 106
There are three different ways for you to get the contents of the response you have got:

1. response.content - libraries like beautifulsoup accept input as binary
2. response.json() - most of the API calls give their response in this format
3. response.text - serves any purpose, including regex-based search or dumping data to a file

Depending on the type of webpage you are scraping, you can use the attribute accordingly.
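A minimal offline sketch of the three attributes. The Response here is built by hand (no network call), purely so the example is self-contained; in real code you would get it from requests.get() or requests.post():

```python
import requests

# Hand-built Response standing in for a real server reply (illustration only).
resp = requests.models.Response()
resp.status_code = 200
resp.encoding = 'utf-8'
resp._content = b'{"id": 1, "name": "demo"}'   # what the server would have sent

print(resp.content)   # raw bytes:   b'{"id": 1, "name": "demo"}'
print(resp.text)      # decoded str: '{"id": 1, "name": "demo"}'
print(resp.json())    # parsed dict: {'id': 1, 'name': 'demo'}
```

All three views come from the same underlying body; which one you want depends on whether the consumer expects bytes, text, or structured data.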
Upvotes: 6
Reputation: 787
If the response is in JSON, in Python 3 you can use the method below directly; there is no need to import json
and call json.loads():
response.json()
Upvotes: 8
Reputation: 300
If you push, for example, an image to some API and want the resulting address (response) back, you could do:
import requests
url = 'https://uguu.se/api.php?d=upload-tool'
data = {"name": filename}
files = {'file': open(full_file_path, 'rb')}
response = requests.post(url, data=data, files=files)
current_url = response.text
print(response.text)
Upvotes: 12
Reputation: 25569
Requests doesn't have an equivalent to urllib2's read().
>>> import requests
>>> response = requests.get("http://www.google.com")
>>> print response.content
'<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage"><head>....'
>>> print response.content == response.text
True
It looks like the POST request you are making returns no content, which is often the case with a POST request. Perhaps it set a cookie? The status code is telling you that the POST succeeded, after all.
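When the body is empty, the useful data is often in the response headers instead. As a hedged, offline sketch (the Response is built by hand here, and the token value is made up; the vCloud Director 5.1 API is documented to return its session token in the x-vcloud-authorization response header rather than in the body):

```python
import requests

# Hand-built Response standing in for the empty-bodied 200 from the question.
resp = requests.models.Response()
resp.status_code = 200
resp._content = b''                                        # empty body
resp.headers['x-vcloud-authorization'] = 'example-token'   # hypothetical value

print(resp.text)                                   # '' -- the body really is empty
print(resp.headers.get('x-vcloud-authorization'))  # the actual payload
```

So a 200 with an empty body can still be a fully successful session-creating POST; check response.headers (and response.cookies) before concluding something went wrong.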
Edit for Python 3:
Python 3 handles data types differently: response.content
returns a sequence of bytes
(raw binary data), while response.text
is a str
(a sequence of Unicode characters).
Thus,
>>> print(response.content == response.text)
False
>>> print(response.content.decode(response.encoding) == response.text)
True
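The same bytes-versus-str relationship can be shown offline with plain literals standing in for what content and text would hold for one and the same body (no network involved):

```python
# Stand-ins for response.content (bytes) and response.text (str).
content = b'<!doctype html><html></html>'
text = '<!doctype html><html></html>'

print(content == text)                  # False: bytes never compare equal to str in Python 3
print(content.decode('utf-8') == text)  # True: decoding the bytes yields the str
print(str(content) == text)             # False: str(b'...') keeps the b'' repr, so don't use str() here
```

This is why str(response.content) is not a substitute for response.text in Python 3; use .decode() with the right encoding instead.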
Upvotes: 260