McEnroe

Reputation: 653

How to crawl a website and extract data into a database with Python?

I'd like to build a webapp to help other students at my university create their schedules. To do that I need to crawl the master schedule (one huge HTML page), along with a link to the detailed description for each course, into a database, preferably in Python. Also, I need to log in to access the data.

Upvotes: 12

Views: 69296

Answers (4)

sharjeel

Reputation: 6035

Scrapy is probably the best Python library for crawling. It can maintain state for authenticated sessions.
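
As a rough illustration, here's a minimal sketch of a spider that logs in first and then scrapes the schedule page. The URLs, form field names, and selectors are hypothetical placeholders, and it uses recent Scrapy class names (older versions spell some of these differently):

import scrapy

class ScheduleSpider(scrapy.Spider):
    name = "schedule"
    # Hypothetical login URL; replace with your university's
    start_urls = ["https://example.edu/login"]

    def parse(self, response):
        # Submit the login form; Scrapy keeps the session cookies
        # for every request the spider makes afterwards
        return scrapy.FormRequest.from_response(
            response,
            formdata={"username": "you", "password": "secret"},  # hypothetical field names
            callback=self.after_login,
        )

    def after_login(self, response):
        # Authenticated now, so fetch the master schedule page
        return scrapy.Request(
            "https://example.edu/master-schedule",  # hypothetical URL
            callback=self.parse_schedule,
        )

    def parse_schedule(self, response):
        # Hypothetical markup: one table row per course, each with a detail link
        for row in response.css("tr.course"):
            yield {
                "title": row.css("td.title::text").get(),
                "detail_url": response.urljoin(row.css("a::attr(href)").get()),
            }

FormRequest.from_response is handy here because it pre-fills the hidden fields from the form on the page, which most login pages require.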

Binary data should be handled separately. You'll have to handle each file type differently, according to your own logic, but for almost any format you can probably find a library. For instance, take a look at PyPDF for handling PDFs; for Excel files you can try xlrd.
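
For example, a minimal xlrd sketch for reading a legacy .xls file might look like this (the filename is a placeholder):

import xlrd

# Open the workbook and grab the first sheet
book = xlrd.open_workbook("schedule.xls")  # hypothetical filename
sheet = book.sheet_by_index(0)

# row_values returns the cell contents of a row as a plain list
for row_idx in range(sheet.nrows):
    print(sheet.row_values(row_idx))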

Upvotes: 4

Riz

Reputation: 368

For this purpose there is a very useful tool called web-harvest (http://web-harvest.sourceforge.net/). I use it to crawl web pages.

Upvotes: 0

Acorn

Reputation: 50587

If you want a powerful scraping framework, there's Scrapy. It has good documentation too, though it may be a little overkill depending on your task.

Upvotes: 12

Alexey Grigorev

Reputation: 2425

I liked using BeautifulSoup for extracting HTML data.

It's as easy as this:

# Python 2 / BeautifulSoup 3 imports; on Python 3 you would use
# "from bs4 import BeautifulSoup" and "from urllib.request import urlopen"
from BeautifulSoup import BeautifulSoup
import urllib

# Fetch the RSS feed and parse it
ur = urllib.urlopen("http://pragprog.com/podcasts/feed.rss")
soup = BeautifulSoup(ur.read())
items = soup.findAll('item')

# Pull the enclosure URL out of every <item> element
urls = [item.enclosure['url'] for item in items]
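
For the schedule use case you'd parse HTML rather than RSS; a sketch along the same lines (hypothetical page URL, assuming the course detail links are ordinary anchor tags) could be:

from BeautifulSoup import BeautifulSoup
import urllib

# Fetch the master schedule page (hypothetical URL) and parse it
page = urllib.urlopen("http://example.edu/master-schedule")
soup = BeautifulSoup(page.read())

# Collect the href of every link on the page; filter down to course links as needed
links = [a['href'] for a in soup.findAll('a', href=True)]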

Upvotes: 3
