alessmar

Reputation: 4727

Not able to follow links using Scrapy

I've created a spider that extends CrawlSpider and followed the advice at http://scrapy.readthedocs.org/en/latest/topics/spiders.html

The problem is that I need to parse both the start url (which happens to coincide with the hostname) and some links that it contains.

So I've defined a rule like: rules = [Rule(SgmlLinkExtractor(allow=['/page/d+']), callback='parse_items', follow=True)], but nothing happens.

Then I've tried to define a set of rules like: rules = [Rule(SgmlLinkExtractor(allow=['/page/d+']), callback='parse_items', follow=True), Rule(SgmlLinkExtractor(allow=['/']), callback='parse_items', follow=True)]. The problem now is that the spider parses everything, since a pattern like '/' matches every link on the site.

How can I tell the spider to parse the start_url and only some links that it includes?

Update:

I've tried to override the parse_start_url method, so now I'm able to get data from the start page, but it still doesn't follow links defined with a Rule:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

from techCrunch.items import Article


class ExampleSpider(CrawlSpider):
    name = 'TechCrunchCrawler'
    start_urls = ['http://techcrunch.com']
    allowed_domains = ['techcrunch.com']
    rules = [Rule(SgmlLinkExtractor(allow=['/page/d+']), callback='parse_links', follow=True)]

    def parse_start_url(self, response):
        print '++++++++++++++++++++++++parse start url++++++++++++++++++++++++'
        return self.parse_links(response)

    def parse_links(self, response):
        print '++++++++++++++++++++++++parse link called++++++++++++++++++++++++'
        articles = []
        for i in HtmlXPathSelector(response).select('//h2[@class="headline"]/a'):
            article = Article()
            article['title'] = i.select('./@title').extract()
            article['link'] = i.select('./@href').extract()
            articles.append(article)

        return articles

Upvotes: 1

Views: 1009

Answers (2)

Steven Almeroth

Reputation: 8192

You forgot to backslash-escape the letter d as \d:

>>> SgmlLinkExtractor(allow=r'/page/d+').extract_links(response)
[]
>>> SgmlLinkExtractor(allow=r'/page/\d+').extract_links(response)
[Link(url='http://techcrunch.com/page/2/', text=u'Next Page',...)]
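
With the escape in place, the rule from the question should match the pagination links. A minimal sketch of the corrected rule:

# Corrected rule: \d+ (one or more digits) instead of the literal letters "d+"
rules = [Rule(SgmlLinkExtractor(allow=[r'/page/\d+']), callback='parse_links', follow=True)]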

Upvotes: 1

user1460015

Reputation: 2003

I had a similar problem in the past, so I stuck with BaseSpider.

Try this:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.contrib.loader import XPathItemLoader

from techCrunch.items import Article


class techCrunch(BaseSpider):
    name = 'techCrunchCrawler'
    allowed_domains = ['techcrunch.com']

    # Fetch the start page and hand it to the parse manager
    def start_requests(self):
        return [Request("http://techcrunch.com", callback=self.parseMgr)]

    # The parse manager extracts items from the current page and schedules the next one
    def parseMgr(self, response):
        print '++++++++++++++++++++++++parse start url++++++++++++++++++++++++'
        yield self.pageParser(response)

        nextPage = HtmlXPathSelector(response).select("//div[@class='page-next']/a/@href").extract()
        if nextPage:
            yield Request(nextPage[0], callback=self.parseMgr)

    # The page parser extracts items from a single page
    def pageParser(self, response):
        print '++++++++++++++++++++++++parse link called++++++++++++++++++++++++'
        loader = XPathItemLoader(item=Article(), response=response)
        loader.add_xpath('title', '//h2[@class="headline"]/a/@title')
        loader.add_xpath('link', '//h2[@class="headline"]/a/@href')
        return loader.load_item()
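
For reference, a minimal sketch of the Article item both spiders import; it is assumed to define only the two fields used above:

# techCrunch/items.py -- minimal sketch; only the fields used above are assumed
from scrapy.item import Item, Field

class Article(Item):
    title = Field()
    link = Field()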

Upvotes: 1
