Reputation: 123
I'm running Scrapy from a script, but all it does is activate the spider. It doesn't go through my item pipeline. I've read http://scrapy.readthedocs.org/en/latest/topics/practices.html but it doesn't say anything about including pipelines.
My setup:
Scraper/
    scrapy.cfg
    ScrapyScript.py
    Scraper/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            my_spider.py
My script:
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from Scraper.spiders.my_spider import MySpiderSpider
spider = MySpiderSpider(domain='myDomain.com')
settings = get_project_settings
crawler = Crawler(Settings())
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Reactor activated...')
reactor.run()
log.msg('Reactor stopped.')
My pipeline:
from scrapy.exceptions import DropItem
from scrapy import log
import sqlite3

class ImageCheckPipeline(object):

    def process_item(self, item, spider):
        if item['image']:
            log.msg("Item added successfully.")
            return item
        else:
            del item
            raise DropItem("Non-image thumbnail found: ")

class StoreImage(object):

    def __init__(self):
        self.db = sqlite3.connect('images')
        self.cursor = self.db.cursor()
        try:
            self.cursor.execute('''
                CREATE TABLE IMAGES(IMAGE BLOB, TITLE TEXT, URL TEXT)
            ''')
            self.db.commit()
        except sqlite3.OperationalError:
            self.cursor.execute('''
                DELETE FROM IMAGES
            ''')
            self.db.commit()

    def process_item(self, item, spider):
        title = item['title'][0]
        image = item['image'][0]
        url = item['url'][0]
        self.cursor.execute('''
            INSERT INTO IMAGES VALUES (?, ?, ?)
        ''', (image, title, url))
        self.db.commit()
Output of the script:
[name@localhost Scraper]$ python ScrapyScript.py
2014-08-06 17:55:22-0400 [scrapy] INFO: Reactor activated...
2014-08-06 17:55:22-0400 [my_spider] INFO: Closing spider (finished)
2014-08-06 17:55:22-0400 [my_spider] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 213,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 18852,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2014, 8, 6, 21, 55, 22, 518492),
         'item_scraped_count': 51,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2014, 8, 6, 21, 55, 22, 363898)}
2014-08-06 17:55:22-0400 [my_spider] INFO: Spider closed (finished)
2014-08-06 17:55:22-0400 [scrapy] INFO: Reactor stopped.
[name@localhost Scraper]$
Upvotes: 12
Views: 8709
Reputation: 3120
Neither @Pawel's solution nor the docs' was working for me. After looking at Scrapy's source code, I realized that in some cases it was not identifying the settings module correctly. I was wondering why the pipelines were not being used, until I realized that they were never found from the script in the first place.
As the docs and Pawel state, I was using:
from scrapy.utils.project import get_project_settings
settings = get_project_settings()
crawler = Crawler(settings)
but, when calling:
print "these are the pipelines:"
print crawler.settings.__dict__['attributes']['ITEM_PIPELINES']
I got:
these are the pipelines:
<SettingsAttribute value={} priority=0>
The settings object wasn't getting properly populated.
I realized that what is required is the path to the project's settings module, relative to the module containing the script that calls Scrapy, e.g. scrapy.myproject.settings. Then, I created the Settings() object as follows:
import os
from scrapy.settings import Settings

settings = Settings()
os.environ['SCRAPY_SETTINGS_MODULE'] = 'scraper.edx_bot.settings'
settings_module_path = os.environ['SCRAPY_SETTINGS_MODULE']
settings.setmodule(settings_module_path, priority='project')
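With that in place, a quick sanity check (a hypothetical snippet using Settings.get, part of the same Settings API) should now print the pipelines declared in your settings module rather than the empty {} shown above:
print settings.get('ITEM_PIPELINES')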
The complete code I used, which did load the pipelines, is:
import os

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.settings import Settings

from scrapy.myproject.spiders.first_spider import FirstSpider

spider = FirstSpider()

settings = Settings()
os.environ['SCRAPY_SETTINGS_MODULE'] = 'scrapy.myproject.settings'
settings_module_path = os.environ['SCRAPY_SETTINGS_MODULE']
settings.setmodule(settings_module_path, priority='project')
crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start(loglevel=log.INFO)
reactor.run()
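As an aside, the detour through os.environ shouldn't be strictly necessary here: as far as I can tell, setmodule also accepts the dotted path directly, so the two environment-variable lines could be collapsed into:
settings.setmodule('scrapy.myproject.settings', priority='project')
Exporting SCRAPY_SETTINGS_MODULE is still useful if other code, such as get_project_settings, needs to locate the settings module.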
Upvotes: 25
Reputation: 7822
You need to actually call get_project_settings; the Settings object that you are passing to your crawler in your posted code will give you the defaults, not your specific project settings. You need to write something like this:
from scrapy.utils.project import get_project_settings
settings = get_project_settings()
crawler = Crawler(settings)
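For completeness, here is a rough sketch of the asker's full script with that one change applied (untested; it assumes the same 2014-era Crawler API as the question, and that the script is run from the directory containing scrapy.cfg so that get_project_settings can locate the project):
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings

from Scraper.spiders.my_spider import MySpiderSpider

spider = MySpiderSpider(domain='myDomain.com')
settings = get_project_settings()  # actually call it: loads Scraper.settings, including ITEM_PIPELINES
crawler = Crawler(settings)        # instead of Crawler(Settings()), which only yields the defaults
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()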
Upvotes: 13