user1592380

Reputation: 36267

How to pass parameter to a scrapy pipeline object

After scraping some data with a Scrapy spider:

class Test_Spider(Spider):

    name = "test"
    def start_requests(self):
        for i in range(900,902,1):
            ........
            yield item

I pass the data to a pipeline object to be written to an SQLite table using SQLAlchemy:

from sqlalchemy import create_engine, Column, Integer, Text, MetaData, Table

class SQLlitePipeline(object):

    def __init__(self):
        _engine = create_engine("sqlite:///data.db")
        _connection = _engine.connect()
        _metadata = MetaData()
        _stack_items = Table("table1", _metadata,
                             Column("id", Integer, primary_key=True),
                             Column("detail_url", Text))
        _metadata.create_all(_engine)
        self.connection = _connection
        self.stack_items = _stack_items

    def process_item(self, item, spider):
        is_valid = True

I'd like to be able to set the table name as a variable instead of hardcoding it as "table1" like it is now. How can this be done?

Upvotes: 7

Views: 5111

Answers (3)

daaawx

Reputation: 3473

A simpler way to do this is to pass the table name as a spider argument when running the crawl:

scrapy crawl test -a table=table1

Then get the value with spider.table:

class TestScrapyPipeline(object):
    def process_item(self, item, spider):
        table = spider.table
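
As a rough end-to-end sketch of this approach (assuming SQLAlchemy 1.x and the same columns as in the question; the open_spider hook, the 'table1' fallback and the insert in process_item are illustrative additions, not part of the original answer):

from sqlalchemy import create_engine, Column, Integer, Text, MetaData, Table

class SQLlitePipeline(object):

    def open_spider(self, spider):
        # spider.table is set by Scrapy from `scrapy crawl test -a table=table1`
        table_name = getattr(spider, 'table', 'table1')
        _engine = create_engine("sqlite:///data.db")
        _metadata = MetaData()
        self.stack_items = Table(table_name, _metadata,
                                 Column("id", Integer, primary_key=True),
                                 Column("detail_url", Text))
        _metadata.create_all(_engine)
        self.connection = _engine.connect()

    def process_item(self, item, spider):
        # write the scraped field into the dynamically named table
        self.connection.execute(self.stack_items.insert(),
                                {"detail_url": item.get("detail_url")})
        return item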

Upvotes: 7

lucasnadalutti

Reputation: 5948

Assuming you pass this parameter through the command line as a setting (e.g. -s table="table1"), define a from_crawler method in your pipeline:

@classmethod
def from_crawler(cls, crawler):
    # Here, you get whatever value was passed through the "table" parameter
    settings = crawler.settings
    table = settings.get('table')

    # Instantiate the pipeline with your table
    return cls(table)

def __init__(self, table):
    _engine = create_engine("sqlite:///data.db")
    _connection = _engine.connect()
    _metadata = MetaData()
    _stack_items = Table(table, _metadata,
                         Column("id", Integer, primary_key=True),
                         Column("detail_url", Text))
    _metadata.create_all(_engine)
    self.connection = _connection
    self.stack_items = _stack_items
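
For completeness, the crawl is then started with the custom setting on the command line (the spider name test comes from the question), and the pipeline still has to be enabled in ITEM_PIPELINES; the myproject.pipelines path below is only a placeholder, not something from the question:

scrapy crawl test -s table=table1

# settings.py -- 'myproject.pipelines' is a placeholder for your project package
ITEM_PIPELINES = {
    'myproject.pipelines.SQLlitePipeline': 300,
}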

Upvotes: 12

eLRuLL

Reputation: 18799

class SQLlitePipeline(object):

    def __init__(self, table_name):

        _engine = create_engine("sqlite:///data.db")
        _connection = _engine.connect()
        _metadata = MetaData()
        _stack_items = Table(table_name, _metadata,
                             Column("id", Integer, primary_key=True),
                             Column("detail_url", Text))
        _metadata.create_all(_engine)
        self.connection = _connection
        self.stack_items = _stack_items

    @classmethod
    def from_crawler(cls, crawler):
        table_name = getattr(crawler.spider, 'table_name')
        return cls(table_name)

With from_crawler you can instantiate the pipeline with whatever parameters you need; here it reads table_name from the spider.
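
A minimal sketch of the spider side, so that crawler.spider actually carries a table_name attribute (defining it as a class attribute is an assumption on my part; it could just as well be passed with scrapy crawl test -a table_name=table1):

class Test_Spider(Spider):

    name = "test"
    table_name = "table1"  # read by the pipeline via crawler.spider.table_name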

Upvotes: 5
