AimiHat

Reputation: 383

Scrapy spider doesn't receive spider_idle signal

I have a spider that processes requests in a chain, using meta to yield items that combine data from multiple requests. The way I used to generate requests was to initiate all of them the first time the parse function was called; however, when I have too many links to request, not all of them get scheduled and I don't end up with everything I need.

To fix that, I am trying to make the spider request 5 products at a time and request more whenever the spider goes idle (by connecting to the spider_idle signal in from_crawler). The problem is that, with my code as it is right now, spider_idle does not run the request function and the spider closes immediately. It is as if the spider never goes idle.

Here is some of the code:

import scrapy
from scrapy import signals
from scrapy.exceptions import DontCloseSpider

# Header and Product are item classes defined elsewhere in the project


class ProductSpider(scrapy.Spider):
    def __init__(self, *args, **kwargs):
        super(ProductSpider, self).__init__(*args, **kwargs)
        self.parsed_data = []
        self.header = {}
        f = open('file.csv', 'r')
        f_data = [[x.strip()] for x in f]
        count = 1
        first = 'smth'  # sentinel: non-empty until the header row is processed
        for product in f_data:
            if first != '':
                # header row: map each column name to its column index
                header = product[0].split(';')
                for each in range(len(header[1:])):
                    self.header[header[each+1]] = each+1
                first = ''
            else:
                # data row: split into fields and append a running row number
                product = product[0].split(';')
                product.append(count)
                count += 1
                self.parsed_data.append(product)
        f.close()

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(ProductSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.request, signal=signals.spider_idle)
        return spider

    name = 'products'
    allowed_domains = [domains]
    handle_httpstatus_list = [400, 404, 403, 503, 504]

    start_urls = [start]

    def next_link(self, response):
        # Follow the chain: request the next non-empty URL for this product,
        # or yield the finished item once all of its links have been visited.
        product = response.meta['product']
        there_is_next = False
        for each in range(response.meta['each']+1, len(product)-1):
            if product[each] != '':
                there_is_next = True
                yield scrapy.Request(
                    product[each],
                    callback=response.meta['func_dict'][each],
                    meta={'func_dict': response.meta['func_dict'],
                          'product': product, 'each': each,
                          'price_dict': response.meta['price_dict'],
                          'item': response.meta['item']},
                    dont_filter=True)
                break
        if not there_is_next:
            item = response.meta['item']
            item['prices'] = response.meta['price_dict']
            yield item

    #[...] chain parsing functions for each request

    def get_products(self):
        products = []
        data = self.parsed_data

        for each in range(5):
            if data:
                products.append(data.pop())
        return products

    def request(self):
        item = Header()
        item['first'] = True
        item['sellers'] = self.header
        yield item

        func_dict = {parsing_functions_for_every_site}

        products = self.get_products()
        if not products:
            return

        for product in products:

            item = Product()

            price_dict = {1:product[1]}
            item['name'] = product[0]
            item['order'] = product[-1]

            for each in range(2, len(product)-1):
                if product[each] != '':
                    yield scrapy.Request(
                        product[each],
                        callback=func_dict[each],
                        meta={'func_dict': func_dict, 'product': product,
                              'each': each, 'price_dict': price_dict,
                              'item': item})
                    break

        raise DontCloseSpider

    def parse(self, response=None):
        pass

Upvotes: 4

Views: 929

Answers (1)

eLRuLL

Reputation: 18799

I assume you have already verified that your request method is being reached, and that the actual problem is that the requests (and items) it yields never get processed.

This is a common mistake when dealing with signals in Scrapy: signal handlers can't yield items or requests, since whatever they return is not consumed by the engine. The way to work around this is to hand them to the engine/scraper directly:

For a request:

request = Request('myurl', callback=self.method_to_parse)
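# schedule the request directly on the engine instead of yielding it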
self.crawler.engine.crawl(
    request,
    spider
)

For an item:

item = MyItem()
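# hand the item to the scraper so it still goes through the item pipelines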
self.crawler.engine.scraper._process_spidermw_output(
    item, 
    None, 
    Response(''), 
    spider,
)
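
Both snippets rely on self.crawler, which Scrapy sets on the spider when it is created through from_crawler (as you already do). Note that Response here is scrapy.http.Response, and that _process_spidermw_output is an internal method (hence the leading underscore), so it may change between Scrapy versions.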

Also, the spider_idle signal handler needs to receive the spider argument, so in your case it should look like this:

def request(self, spider):
    ...

It should work, but I would also recommend a more descriptive method name than request.
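
Putting these pieces together, here is a minimal, self-contained sketch of the pattern (the spider name, the URLs and the schedule_next_batch method name are all made up for illustration; in your spider the body of request would move into such a handler, with crawler.engine.crawl replacing the yields):

import scrapy
from scrapy import signals
from scrapy.exceptions import DontCloseSpider


class BatchSpider(scrapy.Spider):
    name = 'batch_example'                     # hypothetical spider
    start_urls = ['http://example.com']
    pending = ['http://example.com/a',         # made-up queue of URLs to
               'http://example.com/b']         # request in later batches

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(BatchSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.schedule_next_batch,
                                signal=signals.spider_idle)
        return spider

    def parse(self, response):
        pass

    def schedule_next_batch(self, spider):
        # spider_idle handler: it cannot yield, so push requests onto the
        # engine directly and keep the spider alive while they run.
        if not self.pending:
            return                             # queue empty -> normal shutdown
        request = scrapy.Request(self.pending.pop(), callback=self.parse,
                                 dont_filter=True)
        # recent Scrapy versions may accept only the request argument here
        self.crawler.engine.crawl(request, spider)
        raise DontCloseSpider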

Upvotes: 5
