thefan12345

Reputation: 136

Scrapy python - I keep getting Crawled 0 pages

I have tried to follow multiple tutorials, but no matter what I try I always get the same result: "Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)".

My code is very simple:

import scrapy

class SpiderSpider(scrapy.Spider):
    name = 'spider'
    allowed_domains = ['books.toscrape.com/']
    start_urls = ['http://books.toscrape.com//']

    def parse(self, response):
        print(response.url)

The output is:

2020-11-03 22:11:52 [scrapy.utils.log] INFO: Scrapy 2.4.0 started (bot: books)
2020-11-03 22:11:52 [scrapy.utils.log] INFO: Versions: lxml 4.5.2.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.3 (default, Jul 2 2020, 11:26:31) - [Clang 10.0.0 ], pyOpenSSL 19.1.0 (OpenSSL 1.1.1g 21 Apr 2020), cryptography 2.9.2, Platform macOS-10.15.7-x86_64-i386-64bit
2020-11-03 22:11:52 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-11-03 22:11:52 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'books',
 'NEWSPIDER_MODULE': 'books.spiders',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['books.spiders']}
2020-11-03 22:11:52 [scrapy.extensions.telnet] INFO: Telnet Password: ae1669f089ac9e66
2020-11-03 22:11:52 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2020-11-03 22:11:52 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-11-03 22:11:52 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-11-03 22:11:52 [scrapy.middleware] INFO: Enabled item pipelines: []
2020-11-03 22:11:52 [scrapy.core.engine] INFO: Spider opened
2020-11-03 22:11:52 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-11-03 22:11:52 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-11-03 22:11:53 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://books.toscrape.com/robots.txt> (referer: None)
2020-11-03 22:11:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com//> (referer: None)
http://books.toscrape.com//
2020-11-03 22:11:53 [scrapy.core.engine] INFO: Closing spider (finished)
2020-11-03 22:11:53 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 455,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 6065,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'elapsed_time_seconds': 0.593427,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 11, 3, 22, 11, 53, 534397),
 'log_count/DEBUG': 2,
 'log_count/INFO': 10,
 'memusage/max': 49852416,
 'memusage/startup': 49852416,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/404': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2020, 11, 3, 22, 11, 52, 940970)}
2020-11-03 22:11:53 [scrapy.core.engine] INFO: Spider closed (finished)

Upvotes: 0

Views: 705

Answers (2)

gangabass

Reputation: 10666

Your output shows that you have crawled two pages:

http://books.toscrape.com/robots.txt (HTTP status 404 error)
http://books.toscrape.com// (HTTP status 200)

It looks like everything works. Your print statement did run, by the way: the bare http://books.toscrape.com// line right after the 200 response is its output.
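If what you want is for the "scraped N items" counter to move, your parse callback has to yield items (and follow-up requests). Here's a minimal sketch of what that could look like; the CSS selectors are my assumption about the books.toscrape.com markup, not something from your code:

import scrapy

class SpiderSpider(scrapy.Spider):
    name = 'spider'
    # allowed_domains takes bare domains: no scheme, no trailing slash.
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        # Yield one item per book listed on the page.
        for book in response.css('article.product_pod'):
            yield {
                'title': book.css('h3 a::attr(title)').get(),
                'price': book.css('p.price_color::text').get(),
            }
        # Queue the next results page, if there is one.
        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)

With that in place, the closing stats should report an item_scraped_count alongside the response counts.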

Upvotes: 1

Jordan Casey

Reputation: 1011

Looks like there's no robots.txt on the site you're scraping.

You can stop Scrapy from obeying robots.txt by opening your project's settings.py, finding ROBOTSTXT_OBEY, and setting it to False.
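For reference, that's a one-line change in the settings.py that scrapy startproject generated (books/settings.py, judging by the BOT_NAME in your log):

# books/settings.py
ROBOTSTXT_OBEY = False

Worth noting, though: Scrapy treats a 404 on robots.txt as "no rules", so the missing file wasn't actually blocking this crawl; turning the setting off just skips that extra request.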

Upvotes: 0
