Hossein

Reputation: 41811

Recursive use of Scrapy to scrape webpages from a website

I have recently started working with Scrapy. I am trying to gather some info from a large list which is divided into several pages (about 50). I can easily extract what I want from the first page by including the first page in the start_urls list. However, I don't want to add all the links to these 50 pages to this list; I need a more dynamic way. Does anyone know how I can iteratively scrape web pages? Does anyone have any examples of this?
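For concreteness, the current setup looks roughly like this minimal sketch (the URL and selectors are placeholders):

```python
import scrapy

class ListSpider(scrapy.Spider):
    name = "list"
    # Only the first page is listed here; the question is how to
    # reach the remaining ~50 pages without hard-coding each URL.
    start_urls = ["http://www.example.com/list?page=1"]

    def parse(self, response):
        # Placeholder selectors -- extract one item per list row.
        for row in response.css("div.item"):
            yield {"title": row.css("a::text").get()}
```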

Thanks!

Upvotes: 1

Views: 1261

Answers (2)

Alex

Reputation: 4362

Use urllib2 to download a page. Then use either re (regular expressions) or BeautifulSoup (an HTML parser) to find the link to the next page you need. Download that page with urllib2. Rinse and repeat.
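A minimal sketch of that loop, assuming Python 2 (where urllib2 lives) and a site that marks its pagination link with rel="next" (both assumptions; adjust the lookup to the real markup):

```python
import urllib2
import urlparse
from bs4 import BeautifulSoup

url = "http://www.example.com/list?page=1"  # hypothetical starting page
while url:
    html = urllib2.urlopen(url).read()
    soup = BeautifulSoup(html)
    # ... pull whatever data you need out of `soup` here ...

    # Find the link to the next page; rel="next" is an assumption --
    # use whatever marks the "next" link on the real site.
    next_link = soup.find("a", rel="next")
    url = urlparse.urljoin(url, next_link["href"]) if next_link else None
```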

Scrapy is great, but you don't need it to do what you're trying to do.

Upvotes: 1

Jeffrey Greenham

Reputation: 1432

Why don't you want to add all the links to the 50 pages? Are the URLs of the pages consecutive, like www.site.com/page=1 and www.site.com/page=2, or are they all distinct? Can you show me the code that you have now?
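If the URLs do turn out to be consecutive, one option is to generate all 50 of them up front instead of listing them by hand; a minimal sketch, with the URL pattern and page count assumed:

```python
import scrapy

class ListSpider(scrapy.Spider):
    name = "list"
    # Assumed URL pattern and page count -- adjust to the real site.
    start_urls = ["http://www.site.com/page=%d" % n for n in range(1, 51)]

    def parse(self, response):
        # Extract items here exactly as on the first page
        # (the selector below is a placeholder).
        for row in response.css("div.item"):
            yield {"title": row.css("a::text").get()}
```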

Upvotes: 0
