Bak

Reputation: 365

Scrapy states that no pages/items have been crawled?

My spider is currently scraping an XML feed from a website. It is clearly succeeding, because I can see the items being stored through the database pipeline.

However, when I look at the log (set to log.INFO), it states that nothing was crawled:

2013-04-12 11:58:00-0400 [traffics] INFO: Spider opened
2013-04-12 11:58:00-0400 [traffics] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-04-12 11:58:03-0400 [traffics] INFO: Closing spider (finished)
2013-04-12 11:58:03-0400 [traffics] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 273,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 28883,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 4, 12, 15, 58, 3, 469842),
     'log_count/DEBUG': 7,
     'log_count/INFO': 4,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2013, 4, 12, 15, 58, 0, 907300)}
2013-04-12 11:58:03-0400 [traffics] INFO: Spider closed (finished)

Why does it report 0 pages crawled and 0 items scraped when the spider is definitely crawling (and subsequently saving items to the db)?

Upvotes: 1

Views: 366

Answers (1)

Drover

Reputation: 116

Is your process_item method in the database pipeline returning the item after it has been stored?
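For reference, here is a minimal sketch of what that looks like. The pipeline class name and the `store` helper are hypothetical stand-ins for your own database code; the key point is that `process_item` must return the item (or raise `DropItem`), otherwise downstream pipelines and Scrapy's scraped-item counters never see it.

    # Hypothetical database pipeline sketch; `store()` stands in for your DB write.
    from scrapy.exceptions import DropItem

    class TrafficDatabasePipeline(object):

        def process_item(self, item, spider):
            try:
                self.store(item)  # hypothetical database insert
            except Exception:
                raise DropItem("Failed to store item: %r" % item)
            return item  # without this, later pipelines receive None

        def store(self, item):
            # placeholder for the actual database insert logic
            pass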

Upvotes: 1
