AI52487963

Reputation: 179

Avoiding 503 errors with urllib2

I'm new to web scraping with Python, so I don't know if I'm doing this right.

I'm using a script that calls BeautifulSoup to parse the URLs from the first 10 pages of a Google search. Tested with stackoverflow.com, it worked just fine out-of-the-box. Then I tested with another site a few times, trying to see if the script really worked with higher Google page requests, and it 503'd on me. I switched to another URL to test and it worked for a couple of low-page requests, then also 503'd. Now every URL I pass to it 503's. Any suggestions?

import sys     # Used to add the BeautifulSoup folder to the import path
import urllib2 # Used to read the HTML document

if __name__ == "__main__":
    ### Import Beautiful Soup
    ### Here, the BeautifulSoup folder sits at the same level as this script,
    ### so we need to tell Python where to look.
    sys.path.append("./BeautifulSoup")
    from BeautifulSoup import BeautifulSoup

    ### Create opener with a Google-friendly user agent
    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]

    ### Open each page & generate soup
    ### The "start" variable is used to iterate through 10 result pages.
    for start in range(0, 10):
        url = "http://www.google.com/search?q=site:stackoverflow.com&start=" + str(start * 10)
        page = opener.open(url)
        soup = BeautifulSoup(page)

        ### Parse and find
        ### Google appears to wrap result URLs in <cite> tags,
        ### so for each cite tag on each page, print its contents (the URL).
        for cite in soup.findAll('cite'):
            print cite.text
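For what it's worth, a 503 usually means the server is refusing or rate-limiting your requests, so hammering it in a tight loop makes things worse. A minimal sketch of backing off and retrying on 503 (written in Python 3 with urllib.request rather than the urllib2 used above; the helper name, retry count, and delays are my own assumptions, not from the original post):

```python
import time
import urllib.error
import urllib.request


def fetch_with_backoff(url, opener=None, retries=4, base_delay=1.0,
                       sleep=time.sleep):
    """Open `url`, retrying on HTTP 503 with exponential backoff.

    `opener` defaults to a plain urllib opener; `sleep` is injectable so
    the retry logic can be exercised without real delays.
    """
    opener = opener or urllib.request.build_opener()
    for attempt in range(retries):
        try:
            return opener.open(url)
        except urllib.error.HTTPError as err:
            if err.code != 503 or attempt == retries - 1:
                raise  # not a 503, or out of retries: give up
            # Wait 1s, 2s, 4s, ... before trying again.
            sleep(base_delay * (2 ** attempt))
```

Note this only makes the client politer; it won't help if the server has decided to block automated queries outright.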

Upvotes: 0

Views: 1975

Answers (2)

methode

Reputation: 5438

As Ettore said, scraping the search results is against our ToS. However, check out the Web Search API, specifically the bottom section of the documentation, which should give you a hint about how to access the API from non-JavaScript environments.
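The documentation referenced here described a plain REST endpoint for the AJAX Web Search API (since deprecated, so this is historical illustration only). A sketch of building such a request and unpacking the JSON envelope it returned; the helper names are mine, and the response shape (`responseData.results[].url`) is taken from the old docs:

```python
import json
import urllib.parse

# Historical endpoint from the AJAX Search API docs; long since deprecated.
ENDPOINT = "https://ajax.googleapis.com/ajax/services/search/web"


def build_search_url(query, start=0):
    """Build a REST request URL for the legacy Web Search API."""
    params = urllib.parse.urlencode({"v": "1.0", "q": query, "start": start})
    return ENDPOINT + "?" + params


def extract_urls(raw_json):
    """Pull result URLs out of the API's JSON envelope
    (responseData.results[].url, per the old documentation)."""
    data = json.loads(raw_json)
    return [r["url"] for r in data["responseData"]["results"]]
```

Unlike scraping the HTML results page, this route returned structured JSON and didn't depend on `<cite>` tags surviving a redesign.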

Upvotes: 0

Ettore

Reputation: 93

Automated querying is not permitted by Google's Terms of Service. See these articles for more information: Unusual traffic from your computer, and also the Google Terms of Service.

Upvotes: 5
