Reputation: 338
I have a Scrapy project that loads the pipelines but doesn't pass items to them. Any help is appreciated.
A stripped down version of the spider:
#imports

class MySpider(CrawlSpider):
    #RULES AND STUFF

    def parse_item(self, response):
        '''Takes HTML response and turns it into an item ready for database. I hope.
        '''
        #A LOT OF CODE
        return item
At this point, printing the item produces the anticipated result, and settings.py is straightforward enough:
ITEM_PIPELINES = [
    'mySpider.pipelines.MySpiderPipeline',
    'mySpider.pipelines.PipeCleaner',
    'mySpider.pipelines.DBWriter',
]
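(Aside: on later Scrapy releases, ITEM_PIPELINES is a dict mapping each pipeline's class path to an integer that sets its run order; the list form above is the older style. The equivalent would be:

ITEM_PIPELINES = {
    'mySpider.pipelines.MySpiderPipeline': 100,
    'mySpider.pipelines.PipeCleaner': 200,
    'mySpider.pipelines.DBWriter': 300,
}

)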
and the pipelines seem correct (sans imports):
class MySpiderPipeline(object):
    def process_item(self, item, spider):
        print 'PIPELINE: got ', item['name']
        return item


class DBWriter(object):
    """Writes each item to a DB. I hope.
    """
    def __init__(self):
        self.dbpool = adbapi.ConnectionPool('MySQLdb'
            , host=settings['HOST']
            , port=int(settings['PORT'])
            , user=settings['USER']
            , passwd=settings['PASS']
            , db=settings['BASE']
            , cursorclass=MySQLdb.cursors.DictCursor
            , charset='utf8'
            , use_unicode=True
            )
        print('init DBWriter')

    def process_item(self, item, spider):
        print 'DBWriter process_item'
        query = self.dbpool.runInteraction(self._insert, item)
        query.addErrback(self.handle_error)
        return item

    def _insert(self, tx, item):
        print 'DBWriter _insert'
        # A LOT OF UNRELATED CODE HERE
        return item


class PipeCleaner(object):
    def __init__(self):
        print 'Cleaning these pipes.'

    def process_item(self, item, spider):
        print item['name'], ' is cleeeeaaaaannn!!'
        return item
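(The handle_error errback attached in process_item isn't shown above; assume it does nothing more than log the Twisted failure, along the lines of:

    def handle_error(self, failure):
        # failure is a twisted.python.failure.Failure from the DB interaction
        print 'DBWriter error:', failure.getErrorMessage()

)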
When I run the spider, I get this output at startup:
Cleaning these pipes.
init DBWriter
2012-10-23 15:30:04-0400 [scrapy] DEBUG: Enabled item pipelines: MySpiderPipeline, PipeCleaner, DBWriter
Unlike the __init__ methods, which do print to the screen when the crawler starts, the process_item methods never print (or process) anything. I'm crossing my fingers that I've forgotten something very simple.
Upvotes: 3
Views: 3388
Reputation: 61
"Better late than never"
#imports

class MySpider(CrawlSpider):
    #RULES AND STUFF

    def parse_item(self, response):
        '''Takes HTML response and turns it into an item ready for database. I hope.
        '''
        #A LOT OF CODE
        yield item  # <------- yield instead of return
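Yielding turns parse_item into a generator, which Scrapy iterates and feeds every produced item through the enabled pipelines. A minimal self-contained sketch (Scrapy 0.x-era import paths; on newer versions use scrapy.spiders and scrapy.linkextractors instead, and the item field, spider name, and URLs below are hypothetical):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.item import Item, Field

class MyItem(Item):
    name = Field()

class MySpider(CrawlSpider):
    name = 'myspider'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']
    rules = (Rule(SgmlLinkExtractor(), callback='parse_item', follow=True),)

    def parse_item(self, response):
        item = MyItem()
        item['name'] = response.url
        # yield (rather than return) lets the callback emit any number of
        # items and requests; each yielded item reaches the pipelines
        yield item

A side benefit of yield: the same callback can also emit follow-up Requests alongside items.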
Upvotes: 1
Reputation: 4085
2012-10-23 15:30:04-0400 [scrapy] DEBUG: Enabled item pipelines: MySpiderPipeline, PipeCleaner, DBWriter
This line shows that your pipelines are initializing and are fine.
The problem is in your crawler class:
class MySpider(CrawlSpider):
    #RULES AND STUFF

    def parse_item(self, response):
        '''Takes HTML response and turns it into an item ready for database. I hope.
        '''
        #A LOT OF CODE
        # before returning item, print it
        return item
I think you should print the item before returning it from MySpider.
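If a bare print is easy to miss among the crawl output, the spider's built-in log method tags the message with the spider name, which makes the check easier to spot (a sketch; the message text is arbitrary):

    def parse_item(self, response):
        #A LOT OF CODE
        self.log('parse_item built: %r' % item)  # appears in the Scrapy log
        return item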
Upvotes: 1