MuchHelping

Reputation: 631

Scrapy Cloud spider requests fail with GeneratorExit

I have a multi-level Scrapy spider that works locally, but on Scrapy Cloud every request fails with GeneratorExit.

Here are the parse methods:

def parse(self, response):
    results = list(response.css(".list-group li a::attr(href)"))
    for c in results:
        meta = {}
        for key in response.meta.keys():
            meta[key] = response.meta[key]
        yield response.follow(c,
                              callback=self.parse_category,
                              meta=meta,
                              errback=self.errback_httpbin)

def parse_category(self, response):
    category_results = list(response.css(
        ".item a.link-unstyled::attr(href)"))
    category = response.css(".active [itemprop='title']")
    for r in category_results:
        meta = {}
        for key in response.meta.keys():
            meta[key] = response.meta[key]
        meta["category"] = category
        yield response.follow(r, callback=self.parse_item,
                              meta=meta,
                              errback=self.errback_httpbin)

def errback_httpbin(self, failure):
    # log all failures
    self.logger.error(repr(failure))

Here's the traceback:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
GeneratorExit

[stderr] Exception ignored in: <generator object iter_errback at 0x7fdea937a9e8>

File "/usr/local/lib/python3.6/site-packages/twisted/internet/base.py", line 1243, in run
    self.mainLoop()
  File "/usr/local/lib/python3.6/site-packages/twisted/internet/base.py", line 1252, in mainLoop
    self.runUntilCurrent()
  File "/usr/local/lib/python3.6/site-packages/twisted/internet/base.py", line 878, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "/usr/local/lib/python3.6/site-packages/twisted/internet/task.py", line 671, in _tick
    taskObj._oneWorkUnit()
--- <exception caught here> ---
  File "/usr/local/lib/python3.6/site-packages/twisted/internet/task.py", line 517, in _oneWorkUnit
    result = next(self._iterator)
  File "/usr/local/lib/python3.6/site-packages/scrapy/utils/defer.py", line 63, in <genexpr>
    work = (callable(elem, *args, **named) for elem in iterable)
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/scraper.py", line 183, in _process_spidermw_output
    self.crawler.engine.crawl(request=output, spider=spider)
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 210, in crawl
    self.schedule(request, spider)
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 216, in schedule
    if not self.slot.scheduler.enqueue_request(request):
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/scheduler.py", line 57, in enqueue_request
    dqok = self._dqpush(request)
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/scheduler.py", line 86, in _dqpush
    self.dqs.push(reqd, -request.priority)
  File "/usr/local/lib/python3.6/site-packages/queuelib/pqueue.py", line 35, in push
    q.push(obj) # this may fail (eg. serialization error)
  File "/usr/local/lib/python3.6/site-packages/scrapy/squeues.py", line 15, in push
    s = serialize(obj)
  File "/usr/local/lib/python3.6/site-packages/scrapy/squeues.py", line 27, in _pickle_serialize
    return pickle.dumps(obj, protocol=2)
builtins.TypeError: can't pickle HtmlElement objects

I set an errback, but it doesn't log any error details. I also pass meta explicitly in every request, but that makes no difference. Am I missing something?

Update: the error seems specific to multi-level spiders. For now, I have rewritten this one with a single parse method.

Upvotes: 1

Views: 511

Answers (1)

elacuesta

Reputation: 911

One of the differences between running a job locally and running it on Scrapy Cloud is that on Scrapy Cloud the JOBDIR setting is enabled, which makes Scrapy serialize requests into a disk queue instead of keeping them in a memory queue.
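You can reproduce the same behaviour in a local run before deploying by enabling a persistent job directory yourself. A minimal sketch, assuming a spider named myspider (the spider name and the crawls/ path are placeholders):

import scrapy

class MySpider(scrapy.Spider):
    name = "myspider"
    # JOBDIR makes Scrapy persist the request queue to disk,
    # which is the same code path Scrapy Cloud exercises
    custom_settings = {"JOBDIR": "crawls/myspider-run-1"}

With this enabled, the same "can't pickle HtmlElement objects" error should show up locally.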

When serializing to disk, the pickle operation fails because your request.meta dict contains a SelectorList object (assigned in the line category = response.css(".active [itemprop='title']")). The selectors in that list wrap lxml.html.HtmlElement instances, which cannot be pickled (a limitation of lxml, outside Scrapy's scope), hence the TypeError: can't pickle HtmlElement objects.
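The straightforward way around this is to keep only plain, picklable values in request.meta, i.e. extract the text from the selector before the request is scheduled. A minimal sketch of parse_category doing that (::text and extract_first() are standard Scrapy/parsel API; the CSS selectors are the ones from the question):

def parse_category(self, response):
    category_results = response.css(
        ".item a.link-unstyled::attr(href)").extract()
    # keep a plain string in meta instead of a SelectorList,
    # so the request can be pickled into the disk queue
    category = response.css(".active [itemprop='title']::text").extract_first()
    for r in category_results:
        meta = dict(response.meta)
        meta["category"] = category
        yield response.follow(r, callback=self.parse_item,
                              meta=meta,
                              errback=self.errback_httpbin)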

There is a merged pull request that addresses this issue. It does not make the pickle operation succeed; instead, it tells the Scheduler not to try to serialize this kind of request to disk, so such requests go to the memory queue instead.
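Independently of that, if you want to confirm which meta entry breaks serialization, a quick check is to try pickling each value with the same protocol the disk queues use (visible in the traceback). This is plain standard-library Python; the helper name is made up for illustration:

import pickle

def find_unpicklable_meta(meta):
    # return the meta keys whose values cannot be pickled with
    # protocol 2, the protocol Scrapy's disk queues use
    bad_keys = []
    for key, value in meta.items():
        try:
            pickle.dumps(value, protocol=2)
        except Exception:
            bad_keys.append(key)
    return bad_keys

Calling it on the meta dict just before yielding a request would point at the "category" key in this case.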

Upvotes: 6
