Freec0re 123

Reputation: 23

scrapy 503 Service Unavailable on starturl

I modified this spider but it gives these errors:

Gave up retrying <GET https://lib.maplelegends.com/robots.txt> (failed 3 times): 503 Service Unavailable 
2019-01-06 23:43:56 [scrapy.core.engine] DEBUG: Crawled (503) <GET https://lib.maplelegends.com/robots.txt> (referer: None)
2019-01-06 23:43:56 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://lib.maplelegends.com/?p=etc&id=4004003> (failed 1 times): 503 Service Unavailable
2019-01-06 23:43:56 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://lib.maplelegends.com/?p=etc&id=4004003> (failed 2 times): 503 Service Unavailable
2019-01-06 23:43:56 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://lib.maplelegends.com/?p=etc&id=4004003> (failed 3 times): 503 Service Unavailable
2019-01-06 23:43:56 [scrapy.core.engine] DEBUG: Crawled (503) <GET https://lib.maplelegends.com/?p=etc&id=4004003> (referer: None)
2019-01-06 23:43:56 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <503 https://lib.maplelegends.com/?p=etc&id=4004003>: HTTP status code is not handled or not allowed

Crawler code:

#!/usr/bin/env python3

import scrapy
import time

start_url = 'https://lib.maplelegends.com/?p=etc&id=4004003'


class MySpider(scrapy.Spider):
    name = 'MySpider'

    start_urls = [start_url]

    def parse(self, response):
        # print('url:', response.url)

        products = response.xpath('.//div[@class="table-responsive"]/table/tbody')

        for product in products:
            item = {
                #'name': product.xpath('./tr/td/b[1]/a/text()').extract(),
                'link': product.xpath('./tr/td/b[1]/a/@href').extract_first(),
            }

            # url = response.urljoin(item['link'])
            # yield scrapy.Request(url=url, callback=self.parse_product, meta={'item': item})

            yield response.follow(item['link'], callback=self.parse_product, meta={'item': item})

        time.sleep(5)

        # execute with low
        yield scrapy.Request(start_url, dont_filter=True, priority=-1)

    def parse_product(self, response):
        # print('url:', response.url)

        name = response.xpath('(//strong)[1]/text()').re(r'(\w+)')

        hp = response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "image", " " ))] | //img').re(r':(\d+)')

        scrolls = response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "image", " " ))] | //strong+//a//img/@title').re(r'\bScroll\b')

        for name_, hp_, scroll_ in zip(name, hp, scrolls):
            yield {'name': name_.strip(), 'hp': hp_.strip(), 'scroll': scroll_.strip()}

--- it runs without a project and saves the output to output.csv ---

from scrapy.crawler import CrawlerRunner

# settings for the project-less run (results are written to output.csv)
settings = {
    'FEED_FORMAT': 'csv',
    'FEED_URI': 'output.csv',
}

def _run_crawler(spider_cls, settings):
    """
    spider_cls: Scrapy Spider class
    returns: Twisted Deferred
    """
    runner = CrawlerRunner(settings)
    return runner.crawl(spider_cls)     # return Deferred


def test_scrapy_crawler():
    deferred = _run_crawler(MySpider, settings)

    @deferred.addCallback
    def _success(results):
        """
        After crawler completes, this function will execute.
        Do your assertions in this function.
        """

    @deferred.addErrback
    def _error(failure):
        raise failure.value

    return deferred
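For reference, the Deferred is driven roughly like this when run standalone (a sketch, assuming the spider, settings, and the test function above all live in the same file):

from twisted.internet import reactor

if __name__ == '__main__':
    d = test_scrapy_crawler()
    d.addBoth(lambda _: reactor.stop())  # stop the reactor once the crawl finishes
    reactor.run()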

Upvotes: 2

Views: 4016

Answers (1)

Granitosaurus

Reputation: 21436

Robots.txt

Your crawler is trying to check the website's robots.txt file, but the website doesn't have one.

To avoid this you can set the ROBOTSTXT_OBEY setting to False in your settings.py file.
By default it's False, but new Scrapy projects generated with the scrapy startproject command have ROBOTSTXT_OBEY = True in the template settings.
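Since you run the spider without a project, one minimal way to do the same (a sketch, not a full solution) is to set it directly on the spider via the standard custom_settings attribute:

class MySpider(scrapy.Spider):
    name = 'MySpider'

    # don't fetch /robots.txt before crawling
    custom_settings = {'ROBOTSTXT_OBEY': False}

    start_urls = [start_url]

Alternatively, include 'ROBOTSTXT_OBEY': False in the settings dict you pass to CrawlerRunner.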

503 responses

Further, the website seems to respond with 503 to every first request. The website is using some sort of bot protection:

The first request returns 503, then some JavaScript is executed to make an AJAX request that generates a __shovlshield cookie:

(screenshot: AJAX request in the browser dev tools setting the __shovlshield cookie)

It seems the https://shovl.io/ DDoS protection service is being used.

To solve this you need to reverse engineer how the JavaScript generates the cookie, or employ JavaScript rendering techniques/services such as Selenium or Splash.
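For example, a rough sketch of the Selenium route (the driver setup, wait time, and helper name here are illustrative assumptions, not a tested solution): load the page once in a real browser so the JavaScript challenge can run, copy the cookies it sets, and hand them to Scrapy:

from selenium import webdriver
import scrapy
import time

def get_protection_cookies(url):
    # Illustrative helper: open the page in Chrome so the JS challenge can set
    # the __shovlshield cookie, then return all cookies as a dict.
    driver = webdriver.Chrome()
    driver.get(url)
    time.sleep(5)  # crude wait for the challenge script to finish
    cookies = {c['name']: c['value'] for c in driver.get_cookies()}
    driver.quit()
    return cookies

class CookieSpider(scrapy.Spider):
    name = 'cookie_spider'

    def start_requests(self):
        cookies = get_protection_cookies('https://lib.maplelegends.com/')
        yield scrapy.Request(
            'https://lib.maplelegends.com/?p=etc&id=4004003',
            cookies=cookies,  # reuse the browser cookies for the Scrapy request
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info('status: %s', response.status)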

Upvotes: 4
