Yunti

Reputation: 7428

requests library and http error

I'm currently using the Python requests library to interact with an external API that returns JSON. Each endpoint has its own method on the API class, and each of those methods calls the collect_data method.

However, I want the scraper to keep running whenever it encounters an HTTP error (and ideally write the error to a log).
What's the best way to do this? Currently it just breaks when response.raise_for_status() raises.

It seems like I should be using try/except in some way, but I'm not sure how best to apply it here.

def scrape_full_address(self, house_no, postcode):
    address_path = '/api/addresses'
    address_url = self.api_source + address_path
    payload = {
        'houseNo': house_no,
        'postcode': postcode,
    }
    return self.collect_data(url=address_url, method='get', payload=payload)


def collect_data(self, url, method, payload=None):
    if method == 'get':
        data = None
        params = payload
    elif method == 'post':
        params = None
        data = payload
    response = getattr(requests, method)(url=url, params=params, json=data, headers=self.headers)
    if response.status_code == 200:
        return response.json()
    else:
        return response.raise_for_status()

Upvotes: 1

Views: 4991

Answers (1)

Kerry Hatcher

Reputation: 601

When you call scrape_full_address() elsewhere in your code, wrap that call in a try/except block.

For more info see: https://wiki.python.org/moin/HandlingExceptions

from requests.exceptions import HTTPError

try:
    scrape_full_address(659, 31052)
except HTTPError:
    print("Oops! That caused an error. Try again...")

Upvotes: 2
