goh

Reputation: 29511

Scrapy: URLs matching deny rules not being ignored

I have some rules that I fetch dynamically from the database and add to my spider:

        self.name = exSettings['site']
        self.allowed_domains = [exSettings['root']]
        self.start_urls = ['http://' + exSettings['root']]

        self.rules = [Rule(SgmlLinkExtractor(allow=(exSettings['root'] + '$',)), follow=True)]
        denyRules = []

        for rule in exSettings['settings']:
            linkRegex = rule['link_regex']

            if rule['link_type'] == 'property_url':
                propertyRule = Rule(SgmlLinkExtractor(allow=(linkRegex,)), follow=True, callback='parseProperty')
                self.rules.insert(0, propertyRule)
                self.listingEx.append({'link_regex': linkRegex, 'extraction': rule['extraction']})

            elif rule['link_type'] == 'project_url':
                projectRule = Rule(SgmlLinkExtractor(allow=(linkRegex,)), follow=True)  # not set to crawl yet due to a conflict if the same links appear for both
                self.rules.insert(0, projectRule)

            elif rule['link_type'] == 'favorable_url':
                favorableRule = Rule(SgmlLinkExtractor(allow=(linkRegex,)), follow=True)
                self.rules.append(favorableRule)

            elif rule['link_type'] == 'ignore_url':
                denyRules.append(linkRegex)

        # somehow all URLs get ignored if allow is empty and this is placed as the first rule
        d = Rule(SgmlLinkExtractor(allow=('testingonly',), deny=tuple(denyRules)), follow=True)

        # self.rules.insert(0, d)  # I have tried both positions with the same results
        self.rules.append(d)

And I have the following rules in my database:

link_regex: /listing/\d+/.+               (property_url)
link_regex: /project-listings/.+          (favorable_url)
link_regex: singapore-property-listing/   (favorable_url)
link_regex: /mrt/                         (ignore_url)

And I see this in my log:

    <GET http://www.propertyguru.com.sg/singapore-property-listing/property-for-sale/mrt/125/ang-mo-kio-mrt-station> (referer: http://www.propertyguru.com.sg/listing/8277630/for-sale-thomson-grand-6-star-development-)

Isn't /mrt/ supposed to be denied? Why is the above link still being crawled?

Upvotes: 1

Views: 1275

Answers (1)

reclosedev

Reputation: 9502

As far as I know, the deny patterns must be passed to the same SgmlLinkExtractor that holds the allow patterns.

In your case you created a SgmlLinkExtractor that allows the favorable_url pattern ('singapore-property-listing/'). That extractor has no deny patterns of its own, so it extracts the /mrt/ link too; each Rule's extractor works independently, and the deny list in your separate 'testingonly' rule never applies to the others.

To fix this, add the deny patterns to the corresponding SgmlLinkExtractors. Also, see this related question.
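For example, here is a minimal sketch based on the code in the question (assuming the same exSettings structure): collect the ignore_url patterns first, then pass them as deny= to every extractor:

    # Build the shared deny list before constructing the rules,
    # so every SgmlLinkExtractor can filter the ignored patterns.
    deny = tuple(rule['link_regex']
                 for rule in exSettings['settings']
                 if rule['link_type'] == 'ignore_url')

    self.rules = [Rule(SgmlLinkExtractor(allow=(exSettings['root'] + '$',), deny=deny),
                       follow=True)]

    for rule in exSettings['settings']:
        linkRegex = rule['link_regex']

        if rule['link_type'] == 'property_url':
            self.rules.insert(0, Rule(SgmlLinkExtractor(allow=(linkRegex,), deny=deny),
                                      follow=True, callback='parseProperty'))
        elif rule['link_type'] == 'project_url':
            self.rules.insert(0, Rule(SgmlLinkExtractor(allow=(linkRegex,), deny=deny),
                                      follow=True))
        elif rule['link_type'] == 'favorable_url':
            self.rules.append(Rule(SgmlLinkExtractor(allow=(linkRegex,), deny=deny),
                                   follow=True))

With the deny tuple attached to each extractor, a link matching /mrt/ is filtered out no matter which allow pattern would otherwise extract it.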

There may be a way to define global deny patterns, but I haven't seen one.

Upvotes: 2
