Nils

Reputation: 31

Force scrapy to crawl links in the order they appear

I'm writing a spider with Scrapy to crawl a website. The index page is a list of links like www.link1.com, www.link2.com, www.link3.com, and that site is updated really often, so my crawler is part of a process that runs every hour, but I would like to crawl only the new links that I haven't crawled yet. My problem is that Scrapy randomises the order in which it treats each link when going deep. Is it possible to force Scrapy to crawl in order? Like 1, then 2, and then 3, so that I can save the last link that I've crawled and, when starting the process again, just compare the new link 1 with the former link 1?

Hope this is understandable, sorry for my poor English.

Thanks

EDIT :

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class SymantecSpider(CrawlSpider):

    name = 'symantecSpider'
    allowed_domains = ['symantec.com']
    start_urls = [
        'http://www.symantec.com/security_response/landing/vulnerabilities.jsp'
        ]
    rules = [Rule(SgmlLinkExtractor(restrict_xpaths=('//div[@class="mrgnMD"]/following-sibling::table')), callback='parse_item')]

    def parse_item(self, response):
        # Append each crawled URL to a file, one per line
        open("test.t", "ab").write(response.url + "\n")

Upvotes: 3

Views: 4606

Answers (2)

user1460015

Reputation: 2003

Try this example. Construct a list and append all the links to it, then pop them one by one to get your requests in order.

I recommend doing something like @Hassan mentioned and piping your contents to a database (a rough sketch of that idea follows after the code below).

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy import log


class SymantecSpider(BaseSpider):
    name = 'symantecSpider'
    allowed_domains = ['symantec.com']
    allLinks = []
    base_url = "http://www.symantec.com"

    def start_requests(self):
        return [Request('http://www.symantec.com/security_response/landing/vulnerabilities.jsp', callback=self.parseMgr)]

    def parseMgr(self, response):
        # Grab all the links (in page order) and store them in allLinks
        self.allLinks.extend(HtmlXPathSelector(response).select("//table[@class='defaultTableStyle tableFontMD tableNoBorder']/tbody/tr/td[2]/a/@href").extract())
        return Request(self.base_url + self.allLinks.pop(0), callback=self.pageParser)

    # Cycle through allLinks in order, one request at a time
    def pageParser(self, response):
        log.msg('response: %s' % response.url, level=log.INFO)
        if self.allLinks:
            return Request(self.base_url + self.allLinks.pop(0), callback=self.pageParser)
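
As a side note, the "pipe your contents to a database" idea can be done with an item pipeline. A rough sketch (the database and table names are made up, and it assumes the spider yields items with a "url" field):

import sqlite3

class UrlStoragePipeline(object):

    def open_spider(self, spider):
        # One SQLite file shared between runs, so already-seen URLs persist
        self.conn = sqlite3.connect("crawled.db")
        self.conn.execute("CREATE TABLE IF NOT EXISTS urls (url TEXT UNIQUE)")

    def process_item(self, item, spider):
        # INSERT OR IGNORE skips links already stored on a previous run
        self.conn.execute("INSERT OR IGNORE INTO urls VALUES (?)", (item["url"],))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.conn.close()

Enable it in settings.py via ITEM_PIPELINES.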

Upvotes: 3

Steven Almeroth

Reputation: 8192

SgmlLinkExtractor will extract links in the same order they appear on the page.

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
links = SgmlLinkExtractor(
    restrict_xpaths='//div[@class="mrgnMD"]/following-sibling::table',
).extract_links(response)

You can use it in the rules of your CrawlSpider:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class ThreatSpider(CrawlSpider):
    name = 'threats'
    start_urls = [
        'http://www.symantec.com/security_response/landing/vulnerabilities.jsp',
    ]
    rules = (
        Rule(SgmlLinkExtractor(
                restrict_xpaths='//div[@class="mrgnMD"]/following-sibling::table'),
             callback='parse_threats'),
    )
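
If you would rather schedule the requests yourself in exactly the extracted order instead of going through the rules, here is a minimal sketch (the spider and callback names are made up; it uses Request's priority argument to nudge the scheduler towards page order):

from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class OrderedThreatSpider(BaseSpider):
    name = 'ordered_threats'
    start_urls = [
        'http://www.symantec.com/security_response/landing/vulnerabilities.jsp',
    ]

    def parse(self, response):
        links = SgmlLinkExtractor(
            restrict_xpaths='//div[@class="mrgnMD"]/following-sibling::table',
        ).extract_links(response)
        # extract_links() keeps page order; a higher priority is scheduled earlier
        for i, link in enumerate(links):
            yield Request(link.url, callback=self.parse_threat,
                          priority=len(links) - i)

    def parse_threat(self, response):
        self.log('threat page: %s' % response.url)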

Upvotes: 1
