Emir

Reputation: 481

Extracting parts of a webpage with python

So I have a data retrieval/entry project and I want to extract a certain part of a webpage and store it in a text file. I have a text file of urls and the program is supposed to extract the same part of the page for each url.

Specifically, the program copies the legal statute following "Legal Authority:" on pages such as this. As you can see, there is only one statute listed. However, some of the urls also look like this, meaning that there are multiple separated statutes.

My code works for pages of the first kind:

from sys import argv
from urllib2 import urlopen

script, urlfile, legalfile = argv
input = open(urlfile, "r")
output = open(legalfile, "w")

def get_legal(page):
    # this is where Legal Authority: starts in the code
    start_link = page.find('Legal Authority:')
    start_legal = page.find('">', start_link+1)
    end_link = page.find('<', start_legal+1)
    legal = page[start_legal+2: end_link]
    return legal

for line in input:
    pg = urlopen(line.strip()).read()
    statute = get_legal(pg)
    output.write(statute + "\n")

This gives me the desired statute name in the "legalfile" output .txt. However, it cannot copy multiple statute names. I've tried something like this:

def get_legal(page):
    # collect every statute between "Legal Authority:" and the
    # terminating '</a>&nbsp;'
    legal = ""
    start_link = page.find('Legal Authority:')
    while True:
        start_legal = page.find('">', start_link + 1)
        end_link = page.find('<', start_legal + 1)
        legal += page[start_legal + 2: end_link]
        # stop once the closing tag we just hit is the terminator
        if page.startswith('</a>&nbsp;', end_link):
            break
        start_link = end_link
    return legal

Since every list of statutes ends with '</a>&nbsp;' (inspect the source of either of the two links), I thought I could use that marker as the stopping point to loop through and collect all the statutes in one string. Any ideas?
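An alternative to hand-rolling the loop is to slice out just the segment between the label and that terminator and let a regular expression pull out the link texts. This is only a sketch under your stated assumption that '</a>&nbsp;' marks the end of the list; `get_statutes` is a hypothetical helper name:

```python
import re

def get_statutes(page):
    # Slice from the "Legal Authority:" label up to the terminating
    # '</a>&nbsp;', then capture the text of each '">...' span,
    # i.e. the visible text of every link in that segment.
    start = page.find('Legal Authority:')
    end = page.find('</a>&nbsp;', start)
    segment = page[start:end]
    return re.findall(r'">([^<]+)', segment)
```

This keeps the statutes as a list instead of one concatenated string, which also makes it easier to write them out one per line.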

Upvotes: 0

Views: 2538

Answers (2)

tiwo

Reputation: 3369

They provide XML data over there, see my comment. If you think you can't download that many files (or the other end could dislike so many HTTP GET requests), I'd recommend asking their admins if they would kindly provide you with a different way of accessing the data.

I have done so twice in the past (with scientific databases). In one instance the sheer size of the dataset prohibited a download; they ran a SQL query of mine and e-mailed the results (though they had also offered to mail a DVD or hard disk). In another case, I could have made several million HTTP requests to a web service (and they were fine with that), each fetching about 1 kB. That would have taken a long time and been quite inconvenient: it would have required error handling, since some of those requests would always time out, and it would have been non-atomic due to paging. Instead, I was mailed a DVD.

I'd imagine that the Office of Management and Budget might be similarly accommodating.

Upvotes: 0

Mark Gemmill

Reputation: 5949

I would suggest using BeautifulSoup to parse and search your HTML. This will be much easier than doing basic string searches.

Here's a sample that pulls all the <a> tags found within the <td> tag that contains the <b>Legal Authority:</b> tag. (Note that I'm using the requests library to fetch page content here; it's a recommended, much easier-to-use alternative to urlopen.)

import requests
from BeautifulSoup import BeautifulSoup

# fetch the content of the page with requests library
url = "http://www.reginfo.gov/public/do/eAgendaViewRule?pubId=200210&RIN=1205-AB16"
response = requests.get(url)

# parse the html
html = BeautifulSoup(response.content)

# find all the <a> tags
a_tags = html.findAll('a', attrs={'class': 'pageSubNavTxt'})


def fetch_parent_tag(tags):
    # fetch the parent <td> tag of the first <a> tag
    # whose "previous sibling" is the <b>Legal Authority:</b> tag.
    for tag in tags:
        sibling = tag.findPreviousSibling()
        if not sibling:
            continue
        if sibling.getText() == 'Legal Authority:':
            return tag.findParent()

# now, just find all the child <a> tags of the parent.
# i.e. finding the parent of one child, find all the children
parent_tag = fetch_parent_tag(a_tags)
tags_you_want = parent_tag.findAll('a')

for tag in tags_you_want:
    print 'statute: ' + tag.getText()

If this isn't exactly what you needed to do, BeautifulSoup is still the tool you likely want to use for sifting through html.
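If installing a third-party parser isn't an option, the standard library can handle this particular extraction too. Here is a minimal sketch using Python 3's html.parser, with inlined sample markup standing in for the fetched page (the tag structure is assumed to match the real page):

```python
from html.parser import HTMLParser

class StatuteParser(HTMLParser):
    """Collect the text of every <a> tag that appears after the
    literal text 'Legal Authority:' and before the enclosing </td>."""
    def __init__(self):
        super().__init__()
        self.after_label = False
        self.in_a = False
        self.statutes = []

    def handle_data(self, data):
        if 'Legal Authority:' in data:
            self.after_label = True
        elif self.after_label and self.in_a:
            self.statutes.append(data)

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.in_a = True

    def handle_endtag(self, tag):
        if tag == 'a':
            self.in_a = False
        elif tag == 'td':
            self.after_label = False

parser = StatuteParser()
parser.feed('<td><b>Legal Authority:</b>'
            '<a href="#">12 USC 345</a>&nbsp;'
            '<a href="#">29 USC 49</a>&nbsp;</td>')
print(parser.statutes)
```

This is more work than BeautifulSoup and more brittle against layout changes, but it has no dependencies at all.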

Upvotes: 2
