user2270029

Reputation: 871

Scrapy - Exclude Unwanted URLs (Like Comments)

I am using Scrapy to crawl websites and get all pages, but my current rules still allow unwanted URLs, such as comment links like "http://www.example.com/some-article/comment-page-1", in addition to each post's main URL. What can I add to the rules to exclude these unwanted items? Here is my current code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.item import Item

class MySpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    rules = [
        Rule(SgmlLinkExtractor(allow=[r'/\d+']), follow=True),
        Rule(SgmlLinkExtractor(allow=[r'\d+']), callback='parse_item'),
    ]

    def parse_item(self, response):
        # do something
        pass

Upvotes: 3

Views: 6124

Answers (1)

dm03514

Reputation: 55962

SgmlLinkExtractor has an optional argument called deny. A link is only extracted when the allow regex matches and the deny regex does not.

Example from the docs:

rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        Rule(SgmlLinkExtractor(allow=('category\.php', ), deny=('subsection\.php', ))),

        # Extract links matching 'item.php' and parse them with the spider's method parse_item
        Rule(SgmlLinkExtractor(allow=('item\.php', )), callback='parse_item'),
    )

Perhaps you could add a deny pattern so that URLs containing the word comment are excluded?
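
For example, applying deny to the rules from your question might look like this. This is just a sketch: the exact pattern r'comment-page-\d+' is a guess based on the example URL in your question, so adjust it to whatever the comment URLs on your site actually look like.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    rules = [
        # Follow numeric URLs, but skip anything that looks like a comment page.
        Rule(SgmlLinkExtractor(allow=[r'/\d+'], deny=[r'comment-page-\d+']), follow=True),
        Rule(SgmlLinkExtractor(allow=[r'\d+'], deny=[r'comment-page-\d+']), callback='parse_item'),
    ]

    def parse_item(self, response):
        # do something with the matched page
        pass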

Upvotes: 2
