dirtyw0lf

Reputation: 1958

Web scraping - handling non-fatal errors

I'm using web scrapers (bs4, Selenium) and wondering whether there is a better way to handle exceptions that are not fatal (i.e. continue running after the exception).

There are a lot of try/except blocks in my code, one for every property, and I would like to centralize that logic.

Note that productDetails is an instance of ProductDetails, which has Python getter and setter properties. I thought about putting the except logic in the setter, but by then it is too late, because the right-hand side of the assignment has to fetch the value first.

try:
    productDetails.image = soup.find("meta", property="og:image")["content"]
except Exception:
    productDetails.url_valid = False
    continue
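One way to centralize that logic is a small helper that takes the fragile lookup as a zero-argument callable, so the exception is raised and caught inside the helper rather than at every call site. This is a sketch, not code from the question; `safe_extract` is a hypothetical name.

```python
def safe_extract(extract, default=None):
    """Run a fragile lookup, returning `default` instead of raising.

    `extract` is a zero-argument callable wrapping the lookup, so any
    exception it raises is caught here rather than at the call site.
    """
    try:
        return extract()
    except (AttributeError, KeyError, TypeError):
        # e.g. find() returned None, or an attribute/key was missing
        return default

# Stand-in for a parsed page: a lookup that may raise, as soup.find(...) can.
page = {"og:image": "http://example.com/a.png"}

image = safe_extract(lambda: page["og:image"])    # key present: the value
missing = safe_extract(lambda: page["og:title"])  # key absent: None
```

With BeautifulSoup, the call site would wrap the original expression, e.g. `safe_extract(lambda: soup.find("meta", property="og:image")["content"])`, leaving one place to decide how failures are recorded.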

Upvotes: 1

Views: 55

Answers (1)

Dos

Reputation: 2507

Errors should never pass silently. I suggest you handle each expected exception explicitly rather than using a bare except. And avoid putting the except logic in the setter or getter methods.

Upvotes: 1
