BrynJ

Reputation: 8382

Scrapy - unable to make additional request in XMLFeedSpider

I have a Scrapy spider that uses XMLFeedSpider. As well as the data returned for each node in parse_node(), I also need to make an additional request to fetch more data. The issue is that if I yield an additional request from parse_node(), nothing gets returned at all:

from scrapy import Request
from scrapy.exceptions import CloseSpider
from scrapy.spiders import XMLFeedSpider

# MySpidersItem is defined in the project's items module (not shown)

class MySpidersSpider(XMLFeedSpider):
    name = "myspiders"
    namespaces = [('g', 'http://base.google.com/ns/1.0')]
    allowed_domains = ["www.myspiders.com"]
    start_urls = [
        "https://www.myspiders.com/productMap.xml",
    ]
    iterator = 'iternodes'
    itertag = 'item'
    item_count = 0

    def parse_node(self, response, node):
        if(self.settings['CLOSESPIDER_ITEMCOUNT'] and int(self.settings['CLOSESPIDER_ITEMCOUNT']) == self.item_count):
            raise CloseSpider('CLOSESPIDER_ITEMCOUNT limit reached - ' + str(self.settings['CLOSESPIDER_ITEMCOUNT']))
        else:
            self.item_count += 1
        id = node.xpath('id/text()').extract()
        title = node.xpath('title/text()').extract()
        link = node.xpath('link/text()').extract()
        image_link = node.xpath('g:image_link/text()').extract()
        gtin = node.xpath('g:gtin/text()').extract()
        product_type = node.xpath('g:product_type/text()').extract()
        price = node.xpath('g:price/text()').extract()
        sale_price = node.xpath('g:sale_price/text()').extract()
        availability = node.xpath('g:availability/text()').extract()

        item = MySpidersItem()
        item['id'] = id[0]
        item['title'] = title[0]
        item['link'] = link[0]
        item['image_link'] = image_link[0]
        item['gtin'] = gtin[0]
        item['product_type'] = product_type[0]
        item['price'] = price[0]
        item['sale_price'] = '' if len(sale_price) == 0 else sale_price[0]
        item['availability'] = availability[0]

        yield Request(item['link'], callback=self.parse_details, meta={'item': item})

    def parse_details(self, response):
        item = response.meta['item']
        item['price_per'] = 'test'
        return item

If I change the last line of parse_node() to return item it works fine (without setting price_per in the item, naturally).

Any idea what I'm doing wrong?

Upvotes: 2

Views: 245

Answers (2)

BrynJ

Reputation: 8382

I discovered the issue: I was limiting the number of items processed in my parse_node() function, and because of that limit the spider was terminating before the additional request was made. Moving the item-limiting code into my parse_details() function resolves the issue:

    def parse_details(self, response):
        if(self.settings['CLOSESPIDER_ITEMCOUNT'] and int(self.settings['CLOSESPIDER_ITEMCOUNT']) == self.item_count):
            raise CloseSpider('CLOSESPIDER_ITEMCOUNT limit reached - ' + str(self.settings['CLOSESPIDER_ITEMCOUNT']))
        else:
            self.item_count += 1
        item = response.meta['item']
        item['price_per'] = 'test'
        return item
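As an aside, Scrapy ships a built-in CloseSpider extension that enforces CLOSESPIDER_ITEMCOUNT on its own, counting only items that are actually scraped (i.e. after parse_details() returns them), so the hand-rolled counter can be dropped entirely. A minimal settings sketch, assuming the limit value of 10 is illustrative:

```python
# settings.py -- a sketch; the value 10 is illustrative, not from the question.
# With this set, Scrapy's CloseSpider extension stops the crawl after ten
# items have been scraped, counting only items that reach the item pipeline.
CLOSESPIDER_ITEMCOUNT = 10
```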

Upvotes: 1

Ceili

Reputation: 1308

Have you tried checking the contents of item['link']? If it is a relative link (for example /products?id=5), the request will fail because the URL cannot be resolved. Make sure it is an absolute link (for example https://www.myspiders.com/products?id=5).
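If the feed does contain relative links, they can be resolved against a base URL before yielding the request. A minimal sketch using the standard library (inside a Scrapy callback, response.urljoin(link) does the same thing with the response's own URL as the base; the base URL below is an assumption taken from the question's domain):

```python
from urllib.parse import urljoin

def absolutize(link, base="https://www.myspiders.com/"):
    """Return an absolute URL; already-absolute links are left untouched."""
    return urljoin(base, link)

print(absolutize("/products?id=5"))                           # relative link gets resolved
print(absolutize("https://www.myspiders.com/products?id=5"))  # absolute link is unchanged
```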

Upvotes: 1
