user3316569

Reputation: 11

python urllib2.URLError handling

I'm extracting a live feed in JSON from a webpage. The page updates every minute, so I extract the data every minute. But the server of that webpage is sometimes not very stable and my code stops running. I want to write it so that my code keeps requesting the data until it gets it. Someone wrote something like this before:

while True:
    try:
        f = urllib2.urlopen(blablabla...)
        break
    except urllib2.HTTPError, detail:
        if detail.code == 500:
            time.sleep(1)
            continue
        else:
            raise

But my code still stops running with this error: urllib2.URLError: urlopen error [Errno 8] nodename nor servname provided, or not known

Upvotes: 1

Views: 1769

Answers (1)

poke

Reputation: 388383

urllib2.HTTPError is a subclass of urllib2.URLError, but exception matching does not work the other way around: when urlopen raises a plain URLError (such as your DNS failure, Errno 8), an except clause that only names HTTPError will not catch it. If you want to catch URLErrors too, add another except clause to handle them.
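A minimal sketch of that retry loop, with a separate except clause for each error type (HTTPError first, since it is the subclass). The `fetch_with_retries` helper name, the `retries`/`delay` parameters, and the injectable `opener` argument are illustrative choices, not part of urllib2; the try/import fallback just makes the same code run on Python 3, where urllib2 was split into urllib.request and urllib.error:

```python
import time

try:
    from urllib2 import urlopen, URLError, HTTPError  # Python 2
except ImportError:
    # Python 3 equivalents of the urllib2 names
    from urllib.request import urlopen
    from urllib.error import URLError, HTTPError


def fetch_with_retries(url, retries=5, delay=1, opener=urlopen):
    """Keep requesting url until it succeeds or retries attempts fail.

    HTTPError must be caught before URLError, because it is a
    subclass: an "except URLError" clause placed first would
    swallow HTTP errors as well.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return opener(url)
        except HTTPError as detail:
            # A response arrived, but with an error status code.
            if detail.code == 500:
                last_error = detail
                time.sleep(delay)
                continue
            raise  # other HTTP errors are not retried
        except URLError as detail:
            # No response at all: DNS failure, refused connection, ...
            last_error = detail
            time.sleep(delay)
    raise last_error  # all attempts failed; re-raise the last error
```

Passing a fake `opener` lets you exercise the retry logic without touching the network, which is also how you would unit-test it.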

Upvotes: 1
