Reputation: 5729
I have

import urllib2
try:
    urllib2.urlopen("some url")
except urllib2.HTTPError:
    <whatever>

but what I end up with is catching any kind of HTTP error. I want to catch it only if the specified webpage doesn't exist (404?).
Upvotes: 101
Views: 193018
Reputation: 51
If from urllib.error import HTTPError doesn't work, try using from requests.exceptions import HTTPError instead.

Sample:

from requests.exceptions import HTTPError
try:
    <access some url>
except HTTPError:
    # Handle the error as usual
Upvotes: 3
Reputation: 2520
For Python 3.x:

import urllib.request
import urllib.error

try:
    urllib.request.urlretrieve(url, fullpath)
except urllib.error.HTTPError as err:
    print(err.code)
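The error object carries both the status code and the reason phrase. A minimal sketch of what is available on the caught exception, using a hypothetical retrieve() stand-in that simulates a 404 response so it runs without network access:

```python
import io
import urllib.error

def retrieve(url, fullpath):
    # Hypothetical stand-in for urllib.request.urlretrieve() that
    # simulates hitting a missing page (no real network request).
    raise urllib.error.HTTPError(url, 404, "Not Found", None, io.BytesIO())

try:
    retrieve("http://example.com/missing", "out.html")
except urllib.error.HTTPError as err:
    print(err.code)    # prints 404
    print(err.reason)  # prints Not Found
```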
Upvotes: 50
Reputation: 336128
Python 3

from urllib.error import HTTPError

Python 2

from urllib2 import HTTPError

Just catch HTTPError, handle it, and if it's not error 404, simply use raise to re-raise the exception.

See the Python tutorial.

Here is a complete example for Python 2:

import urllib2
from urllib2 import HTTPError
try:
    urllib2.urlopen("some url")
except HTTPError as err:
    if err.code == 404:
        <whatever>
    else:
        raise
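The same pattern translated to Python 3, sketched with a hypothetical urlopen() stand-in that simulates a 404 so the example runs without network access (with the real urllib.request.urlopen you would call it the same way):

```python
import io
from urllib.error import HTTPError

def urlopen(url):
    # Hypothetical stand-in for urllib.request.urlopen() that simulates
    # a 404 response instead of performing a real request.
    raise HTTPError(url, 404, "Not Found", None, io.BytesIO())

try:
    urlopen("http://example.com/missing")
except HTTPError as err:
    if err.code == 404:
        print("page not found")  # handle only the 404 case
    else:
        raise  # re-raise any other HTTP error unchanged
```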
Upvotes: 172
Reputation: 354
Tim's answer seems misleading to me, especially when urllib2 does not return the expected code. For example, this error will be fatal (believe it or not, it is not an uncommon one when downloading URLs):

AttributeError: 'URLError' object has no attribute 'code'

A fast, but maybe not the best, solution would be code using a nested try/except block:

import urllib2

try:
    urllib2.urlopen("some url")
except urllib2.HTTPError as err:
    try:
        if err.code == 404:
            pass  # Handle the error
        else:
            raise
    except:
        ...

More information on the topic of nested try/except blocks: Are nested try/except blocks in python a good programming practice?
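An alternative to nesting is to catch the two exception types separately. HTTPError is a subclass of URLError, so the HTTPError clause must come first; a plain URLError (e.g. a connection failure) has .reason but no .code. A sketch using a hypothetical fetch() stand-in that simulates a connection failure so it runs without network access:

```python
from urllib.error import HTTPError, URLError

def fetch(url):
    # Hypothetical stand-in that simulates a connection failure:
    # a URLError, which has .reason but no .code attribute.
    raise URLError("connection refused")

try:
    fetch("http://example.com/")
except HTTPError as err:   # must come first: HTTPError subclasses URLError
    print("HTTP error", err.code)
except URLError as err:
    print("URL error:", err.reason)  # prints: URL error: connection refused
```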
Upvotes: 5