superdee

Reputation: 697

Scrapy finishing early, not getting all links

I am trying to run a web spider that collects all URLs on a specific site. Right now it returns about 64 URLs, when I know there are hundreds of thousands more. Does anyone know why it is finishing early?

from scrapy.http import Request
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Spider as BaseSpider  # BaseSpider is a deprecated alias for Spider

class MySpider(BaseSpider):
    custom_settings = {
        'AUTOTHROTTLE_ENABLED': True,
        'DOWNLOAD_DELAY': 1.5
    }

    name = 'www.shopgoodwill.com'
    allowed_domains = ['www.shopgoodwill.com']
    start_urls = [
        'https://www.shopgoodwill.com'
    ]

    def __init__(self, alexa_site_id, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.alexa_site_id = alexa_site_id

    def parse(self, response):
        le = LinkExtractor()
        for link in le.extract_links(response):
            yield Request(link.url, callback=self.parse_item)

Here are the results. I noticed it says request_depth_max: 1, but I have DEPTH_LIMIT = 0 in my settings.

2019-02-19 23:31:03 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 14739,
 'downloader/request_count': 32,
 'downloader/request_method_count/GET': 32,
 'downloader/response_bytes': 336986,
 'downloader/response_count': 32,
 'downloader/response_status_count/200': 23,
 'downloader/response_status_count/302': 9,
 'dupefilter/filtered': 11,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 2, 19, 23, 31, 3, 824302),
 'log_count/DEBUG': 38,
 'log_count/INFO': 22,
 'memusage/max': 108908544,
 'memusage/startup': 108908544,
 'offsite/domains': 5,
 'offsite/filtered': 5,
 'request_depth_max': 1,
 'response_received_count': 23,
 'scheduler/dequeued': 32,
 'scheduler/dequeued/memory': 32,
 'scheduler/enqueued': 32,
 'scheduler/enqueued/memory': 32,
 'start_time': datetime.datetime(2019, 2, 19, 23, 30, 4, 918201)}
2019-02-19 23:31:03 [scrapy.core.engine] INFO: Spider closed (finished)

Upvotes: 0

Views: 302

Answers (1)

malberts

Reputation: 2536

As per the comments under the question, you need to extract links in parse_item() too. If you only extract links in parse(), the pages one level down are never scanned for further links, which is why your stats show request_depth_max: 1. A sketch of the fix follows.
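Here is a minimal sketch of that fix. It assumes parse_item() only needs to record the page URL (the question does not show its body), so swap in whatever per-item processing you actually do:

from scrapy import Request, Spider
from scrapy.linkextractors import LinkExtractor

class MySpider(Spider):
    name = 'www.shopgoodwill.com'
    allowed_domains = ['www.shopgoodwill.com']
    start_urls = ['https://www.shopgoodwill.com']

    def parse(self, response):
        # Landing page: follow every link found on it.
        for link in LinkExtractor().extract_links(response):
            yield Request(link.url, callback=self.parse_item)

    def parse_item(self, response):
        # Process the page itself (placeholder item, assumed here)...
        yield {'url': response.url}
        # ...and keep crawling: extract and follow links from this
        # page too, otherwise the crawl stops at depth 1.
        for link in LinkExtractor().extract_links(response):
            yield Request(link.url, callback=self.parse_item)

Alternatively, a CrawlSpider with a single Rule(LinkExtractor(), callback='parse_item', follow=True) does this link-following for you automatically.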

Upvotes: 1
