dabing1205

Reputation: 73

Scrapy: how to catch a download error and retry the download

During my crawl, some pages fail because of an unexpected redirection, and no response is returned. How can I catch this kind of error and re-schedule a request with the original url, not with the redirected url?

Before asking here, I did a lot of searching on Google. It looks like there are two ways to fix this issue: one is to catch the exception in a download middleware, the other is to process the download exception in the errback of the spider's request. I have some questions about both approaches, sketched below.

from scrapy import log


class ProxyMiddleware(object):

    def process_request(self, request, spider):
        # route every request through the proxy
        request.meta['proxy'] = "http://192.168.10.10"
        log.msg('>>>> Proxy %s' % request.meta['proxy'], level=log.DEBUG)

    def process_exception(self, request, exception, spider):
        # read the proxy back out of request.meta
        proxy = request.meta.get('proxy')
        log.msg('Failed to request url %s with proxy %s with exception %s'
                % (request.url, proxy if proxy else 'nil', str(exception)))
        # returning the request re-schedules the download
        return request
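One thing I'm unsure about with this approach: returning the same request from process_exception looks like it can retry forever. A capped variant I'm considering (a sketch only; the 'proxy_retry_times' meta key is just a name I made up):

    def process_exception(self, request, exception, spider):
        # cap retries via a counter in request.meta so a permanently
        # failing url cannot loop forever
        retries = request.meta.get('proxy_retry_times', 0)
        if retries < 3:
            retryreq = request.copy()
            retryreq.meta['proxy_retry_times'] = retries + 1
            retryreq.dont_filter = True  # skip the duplicate filter on retry
            return retryreq
        # fall through (return None) to give up on this request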
from scrapy.http import Request
from scrapy.spider import Spider


class ProxytestSpider(Spider):

    name = "proxytest"
    allowed_domains = ["baidu.com"]
    start_urls = (
        'http://www.baidu.com/',
    )

    def make_requests_from_url(self, url):
        # attach a callback and an errback to every start request
        request = Request(url, dont_filter=True, callback=self.parse,
                          errback=self.download_errback)
        print "make requests"
        return request

    def parse(self, response):
        print "in parse function"

    def download_errback(self, e):
        # e is a twisted Failure wrapping the download exception
        print type(e), repr(e)
        print repr(e.value)
        print "in downloaderror_callback"

Any suggestions on this recrawl issue are highly appreciated. Thanks in advance.

Regards

Bing

Upvotes: 7

Views: 4708

Answers (2)

Frederic Bazin

Reputation: 1529

You can override RETRY_HTTP_CODES in settings.py.

These are the settings I use for proxy errors:

RETRY_HTTP_CODES = [500, 502, 503, 504, 400, 403, 404, 408] 
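Since your failures start as unexpected redirects, note that the redirect middleware may consume the 3xx response before the retry middleware ever sees it. A settings.py sketch that also covers that case (the retry count is arbitrary; tune it for your crawl):

    # settings.py
    RETRY_ENABLED = True
    RETRY_TIMES = 3  # arbitrary cap on retries per request
    RETRY_HTTP_CODES = [500, 502, 503, 504, 400, 403, 404, 408]

    # optionally disable redirects globally so the 3xx response (with the
    # original url) reaches the retry middleware; the per-request
    # alternative is setting the 'dont_redirect' key in request.meta
    REDIRECT_ENABLED = False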

Upvotes: 0

dekomote

Reputation: 4017

You could pass a lambda as an errback:

request = Request(url, dont_filter=True, callback=self.parse, errback=lambda x: self.download_errback(x, url))

that way you'll have access to the url inside the errback function:

def download_errback(self, e, url):
    print url
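If you also want to re-schedule from there, one variation (assuming your Scrapy version schedules requests returned from an errback, and with Request imported from scrapy.http) re-issues the original url:

    def download_errback(self, e, url):
        print url
        # sketch: return a fresh Request for the same url; dont_filter=True
        # keeps the duplicate filter from dropping the retry
        return Request(url, dont_filter=True, callback=self.parse,
                       errback=lambda x: self.download_errback(x, url))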

Upvotes: 2
