Raul Reyes

Reputation: 453

How can I make this work? Should I use requests or urllib.error for exceptions?

I am trying to handle exceptions from HTTP responses.

The problem with my code is that I am forced to use an if condition to catch HTTP error codes:

if page.status_code != requests.codes.ok:
    page.raise_for_status()

I do not believe this is the right way to do it, so I am trying the following:

import requests

url = 'http://someurl.com/404-page.html'
myHeaders = {'User-agent': 'myUserAgent'}

s = requests.Session()

try:
    page = s.get(url, headers=myHeaders)
    #if page.status_code != requests.codes.ok:
    #     page.raise_for_status()
except requests.ConnectionError:
    print("DNS problem or connection refused")
    # Or do something with it
except requests.HTTPError:
    print("Some HTTP response error")
    # Or do something with it
except requests.Timeout:
    print("Error loading... took too long")
    # Or do something with it, perhaps retry
except requests.TooManyRedirects:
    print("Too many redirects")
    # Or do something with it
except requests.RequestException as e:
    print(e)  # exceptions have no .message attribute in Python 3; print the exception itself
    # Or do something with it
else:
    print("nothing happen")
    # Do something if no exception

s.close()

This always prints "nothing happen". How can I catch all possible exceptions related to the GET request?

Upvotes: 1

Views: 79

Answers (1)

Padraic Cunningham

Reputation: 180411

You could catch a RequestException if you want to catch all the exceptions:

import requests

try:
    r = requests.get(........)
except requests.RequestException as e:
    print(e)  # exceptions have no .message attribute in Python 3

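Note that by default requests does not raise anything for a 4xx/5xx response; a 404 simply comes back as a normal `Response`, which is why your `else` branch always runs. You need to call `raise_for_status()` inside the `try` to turn bad status codes into `HTTPError`. A minimal sketch (the `fetch` helper name and the example URL are my own, not from your code):

```python
import requests

def fetch(url, timeout=5):
    """Return the page body, or None if any request-related error occurs."""
    try:
        r = requests.get(url, timeout=timeout)
        r.raise_for_status()  # raises requests.HTTPError for 4xx/5xx status codes
    except requests.RequestException as e:
        # RequestException is the base class of ConnectionError, HTTPError,
        # Timeout, TooManyRedirects, etc., so this single handler covers them all
        print(e)
        return None
    return r.text

# fetch('http://someurl.com/404-page.html') would print the HTTPError and return None
```

If you still want different handling per error type, keep your separate `except` clauses (they are subclasses, so list them before `RequestException`), but the `raise_for_status()` call is what makes the `HTTPError` branch reachable.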
Upvotes: 1
