Thomas Jensen

Reputation: 860

Determining number of documents on a website in Python

I have the following link:

http://www.europarl.europa.eu/sides/getDoc.do?type=REPORT&mode=XML&reference=A7-2010-0001&language=EN

the reference part of the url has the following information:

A7 == The parliament (current is the seventh parliament, the former is A6 and so forth)

2010 == year

0001 == document number

For every year and parliament I would like to identify the number of documents on the website. The task is complicated by the fact that for 2010, for instance, numbers 186, 195, and 196 have empty pages, while the maximum number is 214. Ideally the output should be a vector with all the document numbers, excluding the missing ones.

Can anyone tell me if this is possible in Python?

Best, Thomas

Upvotes: 0

Views: 93

Answers (3)

zoli2k

Reputation: 3458

Here is a solution, but adding some timeout between requests is a good idea:

import time
import urllib.request

URL_TEMPLATE = "http://www.europarl.europa.eu/sides/getDoc.do?type=REPORT&mode=XML&reference=A7-%d-%04d&language=EN"
MAX_RANGE = 300

for year in [2010, 2011]:
    for page in range(1, MAX_RANGE):
        with urllib.request.urlopen(URL_TEMPLATE % (year, page)) as f:
            text = f.read().decode("utf-8", errors="replace")
        if "<title>Application Error</title>" in text:
            print("year %d and page %04d NOT found" % (year, page))
        else:
            print("year %d and page %04d FOUND" % (year, page))
        time.sleep(1)  # pause between requests, as suggested above
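To get the vector of existing document numbers the question actually asks for, the same error-page check can be wrapped in a pair of helpers. This is a Python 3 sketch; the function names and the upper bound of 300 are assumptions, not part of the original answer:

```python
import urllib.request

URL_TEMPLATE = ("http://www.europarl.europa.eu/sides/getDoc.do"
                "?type=REPORT&mode=XML&reference=A7-%d-%04d&language=EN")
# Marker text the site serves on empty document pages.
MISSING_MARKER = "<title>Application Error</title>"

def page_is_missing(html):
    """Return True if the fetched HTML is the empty/error placeholder page."""
    return MISSING_MARKER in html

def existing_documents(year, max_number=300):
    """Collect the numbers of documents that actually exist for one year.

    max_number is an assumed cap; the question notes the 2010 maximum is 214.
    """
    found = []
    for number in range(1, max_number + 1):
        with urllib.request.urlopen(URL_TEMPLATE % (year, number)) as f:
            html = f.read().decode("utf-8", errors="replace")
        if not page_is_missing(html):
            found.append(number)
    return found
```

For example, `existing_documents(2010)` would return a list like `[1, 2, ..., 214]` with the empty numbers (186, 195, 196) skipped, assuming the error-page marker is reliable.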

Upvotes: 1

Jon Mills

Reputation: 1885

Here's a slightly more complete (but hacky) example which seems to work (using httplib2) - I'm sure you can customise it for your specific needs.

I'd also repeat Arrieta's warning about making sure the site's owner doesn't mind you scraping its content.

#!/usr/bin/env python3
import httplib2

h = httplib2.Http(".cache")

parliament = "A7"
year = 2010

# Two lists: one of URLs and one of document numbers.
urllist = []
doclist = []

urltemplate = "http://www.europarl.europa.eu/sides/getDoc.do?type=REPORT&mode=XML&reference=%s-%d-%04d&language=EN"

for document in range(1, 10000):  # document numbers start at 0001
    url = urltemplate % (parliament, year, document)
    resp, content = h.request(url, "GET")
    if content.find(b"Application Error") == -1:  # httplib2 returns bytes
        print("Document %04d exists" % document)
        urllist.append(url)
        doclist.append(document)
    else:
        print("Document %04d doesn't exist" % document)
print("Parliament %s, year %d has %d documents" % (parliament, year, len(doclist)))

Upvotes: 1

Escualo

Reputation: 42082

First, make sure that scraping their site is legal.

Second, notice that when a document is not present, the HTML file contains:

<title>Application Error</title>

Third, use urllib to iterate over all the things you want to:

import urllib.request

root = "http://www.europarl.europa.eu/sides/getDoc.do?type=REPORT&mode=XML&reference=A%d-%d-%04d&language=EN"

for p in range(1, 8):            # parliaments A1 through A7
    for y in range(2000, 2011):
        doc = 1
        while True:
            # use urllib to open the url: (root)+p+y+doc
            with urllib.request.urlopen(root % (p, y, doc)) as f:
                html = f.read().decode("utf-8", errors="replace")
            # if the HTML has the error string, break from the while
            if "Application Error" in html:
                break
            doc += 1

Upvotes: 3
