Georgina

Reputation: 311

Python - Easiest way to scrape text from list of URLs using BeautifulSoup

What's the easiest way to scrape just the text from a handful of webpages (using a list of URLs) using BeautifulSoup? Is it even possible?

Best, Georgina

Upvotes: 6

Views: 6482

Answers (3)

Hugh Bothwell

Reputation: 56624

import urllib2
import BeautifulSoup
import re

Newlines = re.compile(r'[\r\n]\s+')

def getPageText(url):
    # given a url, get page content
    data = urllib2.urlopen(url).read()
    # parse as html structured document
    bs = BeautifulSoup.BeautifulSoup(data, convertEntities=BeautifulSoup.BeautifulSoup.HTML_ENTITIES)
    # kill javascript content
    for s in bs.findAll('script'):
        s.replaceWith('')
    # find body and extract text
    txt = bs.find('body').getText('\n')
    # remove multiple linebreaks and whitespace
    return Newlines.sub('\n', txt)

def main():
    urls = [
        'http://www.stackoverflow.com/questions/5331266/python-easiest-way-to-scrape-text-from-list-of-urls-using-beautifulsoup',
        'http://stackoverflow.com/questions/5330248/how-to-rewrite-a-recursive-function-to-use-a-loop-instead'
    ]
    txt = [getPageText(url) for url in urls]
    for t in txt:
        print t

if __name__=="__main__":
    main()

This version also removes JavaScript and decodes HTML entities.
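For readers on Python 3, where `urllib2` and the old BeautifulSoup 3 module no longer exist, roughly the same approach might look like the sketch below (assuming the `beautifulsoup4` package is installed; the function names here are illustrative, not from the answer above):

```python
import re
from urllib.request import urlopen
from bs4 import BeautifulSoup

# collapse a linebreak followed by whitespace into a single newline
NEWLINES = re.compile(r'[\r\n]\s+')

def extract_text(html):
    # parse the markup, drop <script>/<style> content, return body text
    soup = BeautifulSoup(html, 'html.parser')
    for tag in soup.find_all(['script', 'style']):
        tag.decompose()
    body = soup.find('body') or soup
    return NEWLINES.sub('\n', body.get_text('\n'))

def get_page_text(url):
    # fetch a page and return just its visible text
    return extract_text(urlopen(url).read())
```

With this, the list comprehension from `main()` works unchanged: `txt = [get_page_text(url) for url in urls]`. Note that bs4 decodes HTML entities automatically, so no `convertEntities` argument is needed.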

Upvotes: 6

philnext

Reputation: 3402

I know this is not an answer to your exact question (about BeautifulSoup), but it's worth having a look at Scrapy, which seems to fit your needs.

Upvotes: 1

Jiri

Reputation: 16625

It is perfectly possible. The easiest way is to iterate through the list of URLs, load each page's content, find the URLs it contains, and add them to the main list. Stop iterating when enough pages have been found.

Just some tips:

  • urllib2.urlopen for fetching content
  • BeautifulSoup: findAll('a') for finding URLs
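The crawl loop sketched in those tips might look like this (a sketch assuming the Python 3 `beautifulsoup4` package; `MAX_PAGES` and the helper names are illustrative):

```python
from urllib.request import urlopen
from urllib.parse import urljoin
from bs4 import BeautifulSoup

MAX_PAGES = 10  # illustrative stopping point

def extract_links(html, base_url):
    # findAll('a') is spelled find_all in bs4; resolve relative hrefs
    soup = BeautifulSoup(html, 'html.parser')
    return [urljoin(base_url, a['href'])
            for a in soup.find_all('a', href=True)]

def crawl(seed_urls):
    # breadth-first walk: visit pages, queue their links, stop at MAX_PAGES
    to_visit = list(seed_urls)
    seen = []
    while to_visit and len(seen) < MAX_PAGES:
        url = to_visit.pop(0)
        if url in seen:
            continue
        seen.append(url)
        to_visit.extend(extract_links(urlopen(url).read(), url))
    return seen
```

The `seen` list doubles as the "main list" from the answer; a `set` would be faster for membership checks on larger crawls.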

Upvotes: 1
