Crypto

Reputation: 1217

Scrapy: Unhandled Error

My scraper runs fine for about an hour, then starts throwing these errors:

2014-01-16 21:26:06+0100 [-] Unhandled Error
        Traceback (most recent call last):
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/crawler.py", line 93, in start
            self.start_reactor()
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/crawler.py", line 130, in start_reactor
            reactor.run(installSignalHandlers=False)  # blocking call
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/twisted/internet/base.py", line 1192, in run
            self.mainLoop()
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/twisted/internet/base.py", line 1201, in mainLoop
            self.runUntilCurrent()
        --- <exception caught here> ---
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/twisted/internet/base.py", line 824, in runUntilCurrent
            call.func(*call.args, **call.kw)
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/utils/reactor.py", line 41, in __call__
            return self._func(*self._a, **self._kw)
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/engine.py", line 106, in _next_request
            if not self._next_request_from_scheduler(spider):
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/engine.py", line 132, in _next_request_from_scheduler
            request = slot.scheduler.next_request()
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/scheduler.py", line 64, in next_request
            request = self._dqpop()
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/core/scheduler.py", line 94, in _dqpop
            d = self.dqs.pop()
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/queuelib/pqueue.py", line 43, in pop
            m = q.pop()
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/Scrapy-0.20.2-py2.7.egg/scrapy/squeue.py", line 18, in pop
            s = super(SerializableQueue, self).pop()
          File "/home/scraper/.fakeroot/lib/python2.7/site-packages/queuelib/queue.py", line 157, in pop
            self.f.seek(-size-self.SIZE_SIZE, os.SEEK_END)
        exceptions.IOError: [Errno 22] Invalid argument

What could possibly be causing this? My version is 0.20.2. Once I get this error, Scrapy stops doing anything, and even stopping and running it again (resuming with a JOBDIR directory) gives me the same errors. The only way to get rid of them is to delete the job directory and start over.
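The failing line in the traceback is the relative seek inside queuelib's on-disk queue. A minimal sketch of why that seek can raise `[Errno 22] Invalid argument` — assuming (this is a guess, not confirmed by the source) that the queue file in the JOBDIR was truncated, e.g. by a crash or full disk:

```python
import os
import tempfile

# queuelib's pop() seeks backwards from the end of the queue file by the
# size of the last record. If the file on disk is shorter than that
# recorded size (truncated queue), the target offset is negative and the
# OS rejects the seek with EINVAL (errno 22).
with tempfile.TemporaryFile() as f:
    f.write(b"tiny")                  # a 4-byte "queue" file
    size, SIZE_SIZE = 100, 4          # hypothetical record size from the header
    try:
        f.seek(-size - SIZE_SIZE, os.SEEK_END)   # lands 100 bytes before byte 0
    except OSError as e:
        print(e.errno)                # 22 (EINVAL), matching the traceback
```

If that assumption holds, it would also explain why the error persists across restarts: the corrupt queue file stays in the JOBDIR, so every resume hits the same bad record.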

Upvotes: 3

Views: 1290

Answers (1)

Marcelo Amorim

Reputation: 1682

Try this:

  • Make sure you're running the latest Scrapy version (0.24 at the time of writing)
  • Inside the resumed job folder, back up the file requests.seen
  • After backing it up, remove the Scrapy job folder
  • Start the crawl again, resuming with the JOBDIR= option
  • Stop the crawl
  • Replace the newly created requests.seen with the backed-up copy
  • Start the crawl again

Upvotes: 3
