tesla john

Reputation: 350

No adapter found for objects of type: 'itemadapter.adapter.ItemAdapter'

I want to change the names of images downloaded from a webpage. I want to use standard names given by the website as opposed to cleaning the request url for it.

I have the following pipelines.py:

from itemadapter import ItemAdapter
from scrapy.pipelines.images import ImagesPipeline

class ScrapyExercisesPipeline:
    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        return adapter

class DownfilesPipeline(ImagesPipeline):
    def file_path(self, request, response=None, info=None, item=None):
        adapter = ScrapyExercisesPipeline().process_item()[0]
        image_name: str = f'{adapter}.jpg'
        return image_name

This produces the following error:

raise TypeError(f"No adapter found for objects of type: {type(item)} ({item})")
TypeError: No adapter found for objects of type: <class 'itemadapter.adapter.ItemAdapter'> (<ItemAdapter for ScrapyExercisesItem(name='unknown267', images=['https://bl-web-assets.britishland.com/live/meadowhall/s3fs-public/styles/retailer_thumbnail/public/retailer/boots_1.jpg?qQ.NHRs04tdmGxoyZKerRHcrhCImB3JH&itok=PD5LxLmS&cb=1657061667-curtime&v=1657061667-curtime'])>)

scraper.py:

import scrapy
from scrapy_exercises.items import ScrapyExercisesItem

class TestSpider(scrapy.Spider):
    name = 'test'
    #allowed_domains = ['x']
    start_urls = ['https://www.meadowhall.co.uk/eatdrinkshop?page=1']

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(
                url=url,
                callback=self.parse,
                cb_kwargs = {'pg':0}
            )
    def parse(self, response,pg):
        pg=0
        content_page = response.xpath("//div[@class='view-content']//div")
        for cnt in content_page:
            image_url = cnt.xpath(".//img//@src").get()
            image_name = cnt.xpath(".//img//@alt").get()
            if image_url != None:
                pg+=1
                items = ScrapyExercisesItem()
                if image_name == '':
                    items['name'] = 'unknown'+f'{pg}'
                    items['images'] = [image_url]
                    yield items
                else:
                    items['name'] = image_name
                    items['images'] = [image_url]
                    yield items

settings.py

ITEM_PIPELINES = {
    #'scrapy.pipelines.images.ImagesPipeline': 1,
    'scrapy_exercises.pipelines.ScrapyExercisesPipeline':45,
    'scrapy_exercises.pipelines.DownfilesPipeline': 55
    }
from pathlib import Path
import os
BASE_DIR = Path(__file__).resolve().parent.parent
IMAGES_STORE = os.path.join(BASE_DIR, 'images')
IMAGES_URLS_FIELD = 'images'
IMAGES_RESULT_FIELD = 'results'

Upvotes: 1

Views: 1488

Answers (1)

Alexander

Reputation: 17335

You are calling one pipeline from within another pipeline, while that first pipeline is also registered in your settings to run on its own. On top of that, ScrapyExercisesPipeline.process_item returns an ItemAdapter instead of the item, so the ItemAdapter constructor in the next pipeline is handed an ItemAdapter and raises the error you see. It would be simpler to just extract the name field from the item that file_path already receives in your DownfilesPipeline and return it.

Change your pipelines.py file to:

from itemadapter import ItemAdapter
from scrapy.pipelines.images import ImagesPipeline

class DownfilesPipeline(ImagesPipeline):
    def file_path(self, request, response=None, info=None, item=None):
        return item['name'] + '.jpg'
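One caveat: since item['name'] comes from an img alt attribute, it can contain characters that are awkward in file names (slashes, spaces, query-string fragments). A minimal sketch of a sanitizing helper you could call from file_path - the function name is my own, not part of Scrapy:

```python
import re

def safe_image_name(name: str) -> str:
    # Replace every run of characters that is not alphanumeric,
    # a dash or an underscore with a single underscore, then strip
    # leading/trailing underscores.
    return re.sub(r'[^A-Za-z0-9_-]+', '_', name).strip('_')

# Inside file_path() this could be used as:
#     return safe_image_name(item['name']) + '.jpg'
```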

You also need to turn off the ScrapyExercisesPipeline in your settings, since it no longer does anything useful.
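Based on the settings.py shown in the question, that means removing the ScrapyExercisesPipeline entry from ITEM_PIPELINES so only the images pipeline runs:

```python
ITEM_PIPELINES = {
    'scrapy_exercises.pipelines.DownfilesPipeline': 55,
}
```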

Upvotes: 1
