Nina

Reputation: 211

Scrapy, Python: Multiple Item Classes in one pipeline?

I have a spider that scrapes data that cannot be stored in a single item class.

For illustration, I have one Profile Item, and each Profile Item might have an unknown number of Comments. That is why I want to implement a Profile Item and a Comment Item. I know I can pass them to my pipeline simply by yielding them.

  1. However, how can a pipeline with one process_item method handle two different item classes?

  2. Or is it possible to use separate process_item methods?

  3. Or do I have to use several pipelines?

  4. Or is it possible to store an iterable in a Scrapy item field? Something like this:


comments_list = []
comments = response.xpath(somexpath)
for x in comments.extract():
    comments_list.append(x)
ScrapyItem['comments'] = comments_list
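
For illustration, a minimal items.py sketch of the two item classes I have in mind (field names are placeholders):

import scrapy

class ProfileItem(scrapy.Item):
    name = scrapy.Field()
    comments = scrapy.Field()  # could hold the list from question 4

class CommentItem(scrapy.Item):
    profile_name = scrapy.Field()  # links the comment back to its profile
    text = scrapy.Field()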

Upvotes: 21

Views: 14368

Answers (7)

Rejected

Reputation: 4491

By default every item goes through every pipeline.

For instance, if you yield a ProfileItem and a CommentItem, they'll both go through all pipelines. If you have a pipeline set up to track item types, your process_item method could look like this:

def process_item(self, item, spider):
    self.stats.inc_value('typecount/%s' % type(item).__name__)
    return item

When a ProfileItem comes through, 'typecount/ProfileItem' is incremented. When a CommentItem comes through, 'typecount/CommentItem' is incremented.
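
The self.stats above assumes the pipeline has been given Scrapy's stats collector; a minimal sketch of that wiring (the class name is just an example):

class ItemTypeStatsPipeline:

    def __init__(self, stats):
        self.stats = stats

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this hook with the running crawler;
        # keep its stats collector for use in process_item.
        return cls(crawler.stats)

    def process_item(self, item, spider):
        self.stats.inc_value('typecount/%s' % type(item).__name__)
        return item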

You can also have a pipeline handle only one type of item, which is useful when the handling for that type is unique; just check the item type before proceeding:

def process_item(self, item, spider):
    if not isinstance(item, ProfileItem):
        return item
    # Handle your Profile Item here.

If you had the two process_item methods above set up in different pipelines, an item will go through both of them, being counted in the first and processed (or ignored) in the second.

Additionally, you could have one pipeline set up to handle all 'related' items:

def process_item(self, item, spider):
    if isinstance(item, ProfileItem):
        return self.handle_profile(item, spider)
    if isinstance(item, CommentItem):
        return self.handle_comment(item, spider)

def handle_profile(self, item, spider):
    # Handle profile here, return item

def handle_comment(self, item, spider):
    # Handle Comment here, return item

Or you could make it even more complex and develop a type delegation system that loads classes and calls default handler methods, similar to how Scrapy handles its middlewares and pipelines; a rough sketch of that idea follows. It's really up to you how complex you need it to be and what you want to do.
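
As a sketch of that delegation idea (not Scrapy's actual middleware machinery; assuming the same ProfileItem and CommentItem classes):

class DelegatingPipeline:

    def __init__(self):
        # Map each item class to its handler method.
        self.handlers = {
            ProfileItem: self.handle_profile,
            CommentItem: self.handle_comment,
        }

    def process_item(self, item, spider):
        handler = self.handlers.get(type(item))
        # Unknown item types pass through untouched.
        return handler(item, spider) if handler else item

    def handle_profile(self, item, spider):
        # Handle profile here
        return item

    def handle_comment(self, item, spider):
        # Handle comment here
        return item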

Upvotes: 26

Vitaliy

Reputation: 29

I've come up with this solution.

  1. I created an ITEMS dict in my settings.py file:
ITEMS = {
    'project.items.Item1': {
        'filename': 'item1',
    },
    'project.items.Item2': {
        'filename': 'item2',
    },
}
  2. Imported the settings in my pipelines.py file:
from scrapy.utils.project import get_project_settings
  3. In the open_spider method, created a file and attached an exporter for each item class from the settings:
for settings_key in self.settings.keys():
    filename = f"output/{self.settings[settings_key]['filename']}_{self.dt}.csv"
    self.settings[settings_key]['file'] = open(filename, 'wb')
    self.settings[settings_key]['exporter'] = CsvItemExporter(
        self.settings[settings_key]['file'], 
        encoding='utf-8', 
        delimiter=';', 
        quoting=csv.QUOTE_NONNUMERIC
    )
    self.settings[settings_key]['exporter'].start_exporting()
  4. In the close_spider method, stopped all exporters and closed all files:
for settings_key in self.settings.keys():
    self.settings[settings_key]['exporter'].finish_exporting()
    self.settings[settings_key]['file'].close()
  5. In the process_item method, picked the proper exporter for the item and exported it (a full assembly of these steps is sketched after this list):
item_class = f"{type(item).__module__}.{type(item).__name__}"
settings_item = self.settings.get(item_class)
if settings_item:
    settings_item['exporter'].export_item(item)
return item
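
Putting the steps together, a minimal sketch of the whole pipeline (the class name, the output directory, and the self.dt timestamp format are assumptions based on the fragments above):

import csv
import os
from datetime import datetime

from scrapy.exporters import CsvItemExporter
from scrapy.utils.project import get_project_settings


class MultiCsvExportPipeline:

    def open_spider(self, spider):
        # ITEMS maps dotted item-class paths to per-class settings.
        self.settings = get_project_settings().get('ITEMS', {})
        self.dt = datetime.now().strftime('%Y%m%d_%H%M%S')  # assumed format
        os.makedirs('output', exist_ok=True)
        for settings_key in self.settings.keys():
            filename = f"output/{self.settings[settings_key]['filename']}_{self.dt}.csv"
            self.settings[settings_key]['file'] = open(filename, 'wb')
            self.settings[settings_key]['exporter'] = CsvItemExporter(
                self.settings[settings_key]['file'],
                encoding='utf-8',
                delimiter=';',
                quoting=csv.QUOTE_NONNUMERIC,
            )
            self.settings[settings_key]['exporter'].start_exporting()

    def close_spider(self, spider):
        for settings_key in self.settings.keys():
            self.settings[settings_key]['exporter'].finish_exporting()
            self.settings[settings_key]['file'].close()

    def process_item(self, item, spider):
        item_class = f"{type(item).__module__}.{type(item).__name__}"
        settings_item = self.settings.get(item_class)
        if settings_item:
            settings_item['exporter'].export_item(item)
        return item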

Upvotes: 1

Ikram Khan Niazi

Reputation: 807

I would suggest adding a comments field to the ProfileItem instead. This way you can store multiple comments in the profile of a single person. Secondly, such data will be easier to process; see the sketch below.
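
A sketch of what that could look like in a spider callback (the XPaths and field names are assumptions):

def parse_profile(self, response):
    item = ProfileItem()
    item['name'] = response.xpath('//h1/text()').get()
    # Nest all comments inside the profile instead of yielding
    # a separate CommentItem for each one.
    item['comments'] = [
        {'text': text}
        for text in response.xpath('//div[@class="comment"]/text()').getall()
    ]
    yield item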

Upvotes: 0

quester

Reputation: 564

As of Python >= 3.10 there is structural pattern matching (https://www.python.org/dev/peps/pep-0622/), so it will probably be convenient to implement the router from @mdkb's answer with a match statement.

Note that subclassing scrapy.Item is also no longer the only option: since Python >= 3.7 there are dataclasses, which recent Scrapy versions also accept as items.
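
A sketch of that router written as a match statement (Python 3.10+; item classes as in the other answers):

def process_item(self, item, spider):
    match item:
        case ProfileItem():
            # Class patterns match on isinstance().
            return self.handle_profile(item, spider)
        case CommentItem():
            return self.handle_comment(item, spider)
        case _:
            return item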

Upvotes: 0

mdkb

Reputation: 402

@Rejected's answer was the solution, but it needed some tweaks before it would work for me, so I am sharing them here. This is my pipelines.py:

from .items import MyFirstItem, MySecondItem # needed import of Items

class MyItemsPipeline(object): # wrapping class was missing; the name is illustrative
    def process_item(self, item, spider):
        if isinstance(item, MyFirstItem):
            return self.handlefirstitem(item, spider)
        if isinstance(item, MySecondItem):
            return self.handleseconditem(item, spider)

    def handlefirstitem(self, item, spider): # needed self added
        self.storemyfirst_db(item) # function to pipe it to database table
        return item

    def handleseconditem(self, item, spider): # needed self added
        self.storemysecond_db(item) # function to pipe it to database table
        return item

Upvotes: 5

gerosalesc

Reputation: 3063

Defining multiple Items is tricky when you are exporting your data and the items have a relation (Profile 1 -- N Comments, for instance) that requires exporting them together, because each item is processed at a different time by the pipelines. An alternative approach for this scenario is to define a custom Scrapy field, for example:

class ProfileField(scrapy.item.Field):
    # your business here
    pass

class CommentItem(scrapy.Item):
    profile = ProfileField()

But given a scenario where you MUST have two items, it is highly recommended to use a different pipeline for each of these item types, and also different exporter instances, so that you get the information in different files (if you are using files):

settings.py

ITEM_PIPELINES = {
    'pipelines.CommentsPipeline': 100,
    'pipelines.ProfilePipeline': 200,
}

pipelines.py

class CommentsPipeline(object):
    def process_item(self, item, spider):
        if isinstance(item, CommentItem):
            pass  # Your business here
        return item  # return the item so other pipelines still see it

class ProfilePipeline(object):
    def process_item(self, item, spider):
        if isinstance(item, ProfileItem):
            pass  # Your business here
        return item  # return the item so other pipelines still see it
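
For completeness, a sketch of what one of these pipelines could look like with its own exporter instance writing to its own file (the file name and CSV format are assumptions):

from scrapy.exporters import CsvItemExporter

class CommentsPipeline(object):

    def open_spider(self, spider):
        self.file = open('comments.csv', 'wb')
        self.exporter = CsvItemExporter(self.file)
        self.exporter.start_exporting()

    def close_spider(self, spider):
        self.exporter.finish_exporting()
        self.file.close()

    def process_item(self, item, spider):
        if isinstance(item, CommentItem):
            self.exporter.export_item(item)
        return item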

Upvotes: 9

Prune

Reputation: 77857

The straightforward way is to have the parser include two sub-parsers, one for each data type: the main parser determines the type from the input and passes the response to the appropriate subroutine, as sketched below.
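
In Scrapy terms, a sketch of that first approach (the selectors and item classes are illustrative):

def parse(self, response):
    # Main parser: determine the data type, then delegate.
    if response.xpath('//div[@class="profile"]'):
        yield from self.parse_profile(response)
    if response.xpath('//div[@class="comment"]'):
        yield from self.parse_comments(response)

def parse_profile(self, response):
    for node in response.xpath('//div[@class="profile"]'):
        yield ProfileItem(name=node.xpath('./h1/text()').get())

def parse_comments(self, response):
    for node in response.xpath('//div[@class="comment"]'):
        yield CommentItem(text=node.xpath('./text()').get())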

A second approach is to run the parsers in sequence: one parses Profiles and ignores everything else; the second parses Comments and ignores everything else (the same principle as above).

Does this move you forward?

Upvotes: 1
