cpat

Reputation: 47

python not catching HTTPError

My code is the following:

import json
import urllib2
from urllib2 import HTTPError

def karma_reddit(user):
    while True:
        try:
            url = "https://www.reddit.com/user/" + str(user) + ".json"
            data = json.load(urllib2.urlopen(url))
        except urllib2.HTTPError as err:
            if err == "Too Many Requests":
                continue
            if err == "Not Found":
                print str(user) + " isn't a valid username."
            else:
                raise
        break

I'm trying to get the data from a reddit user profile. However, HTTPErrors keep occurring, and when I try to catch them with the except clause they are still raised, without the program executing either another iteration of the loop or the print statement. How do I catch the HTTPErrors? I'm pretty new to Python, so this might be a rookie mistake. Thanks!

Upvotes: 0

Views: 1112

Answers (1)

Padraic Cunningham

Reputation: 180512

You need to check err.msg for the string; err itself is never equal to either string, so you always fall through to the else: raise:

if err.msg == "Too Many Requests":
    continue
if err.msg == "Not Found":
    print str(user) + " isn't a valid username."
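As a side note, comparing the numeric err.code is usually more robust than matching the reason phrase, since the exact wording of the phrase can vary between servers. A minimal sketch, constructing the exception by hand so no network call is needed (under Python 3 the class lives in urllib.error rather than urllib2, but it exposes the same code and msg attributes):

```python
from urllib.error import HTTPError  # urllib2.HTTPError on Python 2

# Build an HTTPError by hand purely to illustrate the attributes.
err = HTTPError("https://www.reddit.com/user/nobody.json",
                404, "Not Found", None, None)

print(err == "Not Found")      # False: the exception object is never equal to a string
print(err.msg == "Not Found")  # True: the reason phrase
print(err.code == 404)         # True: the numeric status code
```

Comparing err.code == 429 / err.code == 404 in the original except block would fix the bug without any other changes.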

I would recommend using requests; with reddit, the error code is actually returned in the json, so you can use that:

import requests


def karma_reddit(user):
    while True:
        data = requests.get("https://www.reddit.com/user/" + str(user) + ".json").json()
        if data.get("error") == 429:
            print("Too many requests")
        elif data.get("error") == 404:
            print(str(user) + " isn't a valid username.")
        return data

The fact that you were re-raising all exceptions except your 429s and 404s means you don't need a try at all. You should really break out on any error, just output a message to the user, and limit the number of requests you make.
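Putting that together, one possible sketch of a retry-limited version: the fetch parameter, max_attempts, and delay are illustrative names I've introduced (not part of the original answer), and fetch is any callable that returns reddit's parsed json for a URL, e.g. lambda u: requests.get(u).json():

```python
import time

def karma_reddit(user, fetch, max_attempts=3, delay=2.0):
    # fetch is a hypothetical injected callable returning parsed json,
    # so the retry logic can be exercised without hitting reddit.
    url = "https://www.reddit.com/user/" + str(user) + ".json"
    for attempt in range(max_attempts):
        data = fetch(url)
        error = data.get("error")
        if error == 429:          # rate limited: back off, then retry
            time.sleep(delay)
            continue
        if error == 404:
            print(str(user) + " isn't a valid username.")
        return data
    print("Giving up after " + str(max_attempts) + " rate-limited attempts.")
    return None
```

Injecting fetch is just one way to cap the retries; the point is that the loop now terminates after max_attempts rate-limit responses instead of spinning forever.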

Upvotes: 1
