Sam

Reputation: 313

Scrapy linkextractor, follow not working

I am trying to extract all links from a website. My spider is a subclass of a superclass called GeneralSpider. The problem is that when I rename the method 'parse_url' to 'parse' (overriding a method of the superclass), the link extractor gets all the links on the main page but does not follow them. If I don't rename the method, the spider does not work at all. Am I doing something wrong?

# -*- coding: utf-8 -*-

from core.generalSpider import GeneralSpider
from scrapy.linkextractors import LinkExtractor
from scrapy import log
from scrapy.contrib.spiders import Rule
from scrapy.item import Item, Field

from spiders.settings import GET_ITEMS


class MyItem(Item):
    url = Field()
    text = Field()
    item = Field()


class GetItemsSpider(GeneralSpider):

    name = GET_ITEMS
    start_urls = 'http://www.example.com'
    allowed_domains = ['example.com']
    rules = (Rule(LinkExtractor(allow=()), callback='parse_url', follow=True), )

    def __init__(self, port, **kwargs):
        super(GetItemsSpider, self).__init__(port, **kwargs)

        # User agent
        self.user_agent = Utils.get_random_item_from_list(core_settings.USER_AGENT_LIST)

        # Scrapy logs
        self.log('GetItemsSpider init start_urls= %s  parameters= %s ' %
                 (self.start_urls, str(self.parameters)), level=log.DEBUG)
        self.log('%s init start_urls= %s  parameters= %s ' %
                 (self.name, self.start_urls, str(self.parameters)), level=log.INFO)
        self.log('USER AGENT = %s' % self.user_agent, level=log.INFO)
        self.log('PORT = %s' % self._proxy_port, level=log.INFO)

    def parse_url(self, response):
        items = []
        self.log('GetItemsSpider parse start %s' % response.url, level=log.DEBUG)
        for link in LinkExtractor().extract_links(response):
            item = MyItem()
            item['text'] = link.text
            item['url'] = link.url
            items.append(item)
        return items

Upvotes: 1

Views: 2412

Answers (2)

Sam

Reputation: 313

In the end I could not find out why my code was not working, but I found an alternative solution:

# Module-level imports needed by this method:
from scrapy.http import Request
import urlparse  # Python 2 standard library (urllib.parse on Python 3)

def parse_url(self, response):
    self.log('GetItemsSpider parse start %s' % response.url, level=log.DEBUG)
    for link in LinkExtractor().extract_links(response):
        item = MyItem()
        item['text'] = link.text
        item['url'] = link.url
        if condition:  # 'condition' is whatever test decides which links to crawl further
            yield Request(urlparse.urljoin(response.url, link.url), callback=self.parse)
        yield item

This solution is based on Philip Adzanoukpe's example. I hope this can be useful.

Upvotes: 0

eLRuLL

Reputation: 18799

There is no better explanation than the one in the documentation; check the warning here.

Just don't override parse.
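
For reference, a minimal sketch of the pattern that warning describes, assuming GeneralSpider ultimately behaves like a plain CrawlSpider (the class and spider names below are illustrative, not your actual code): keep the rule callback under any name other than parse, so that CrawlSpider's built-in parse can keep applying the rules and following links.

# A minimal CrawlSpider sketch: the rule callback is named parse_url,
# NOT parse, so CrawlSpider's own parse() still drives the rules.
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class ItemsSketchSpider(CrawlSpider):
    name = 'items_sketch'                      # hypothetical name
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']    # a list, not a single string

    rules = (
        Rule(LinkExtractor(allow=()), callback='parse_url', follow=True),
    )

    def parse_url(self, response):
        # Called by CrawlSpider for every followed page; overriding
        # parse() instead would disable the rule machinery.
        self.logger.debug('parse_url %s', response.url)
        for link in LinkExtractor().extract_links(response):
            yield {'url': link.url, 'text': link.text}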

Upvotes: 1
