e-vop

Reputation: 21

scrapy CSV writing

Being a new user, I managed to make a spider that crawls an e-commerce site and extracts the title and the variations of each product. The output CSV file currently has one line per product, but what I would like is one line per variation. Could someone please help me move forward with my project?

I have searched for an answer to this question but unfortunately cannot find one.

my spider:

import scrapy
from w3lib.html import remove_tags
from products_crawler.items import ProductItem


class DemostoreSpider(scrapy.Spider):
    name = "demostore"
    allowed_domains = ["adns-grossiste.fr"]
    start_urls = [
        'http://adns-grossiste.fr/17-produits-recommandes',
    ]
    download_delay = 0.5

    def parse(self, response):
        # follow every category link in the left-hand menu
        for category_url in response.css('#categories_block_left > div > ul > li ::attr(href)').extract():
            yield scrapy.Request(category_url, callback=self.parse_category, meta={'page_number': '1'})

    def parse_category(self, response):
        # follow every product link on the category page
        for product_url in response.css('#center_column > ul > li > div > div.right-block > h5 > a ::attr(href)').extract():
            yield scrapy.Request(product_url, callback=self.parse_product)

    def parse_product(self, response):
        item = ProductItem()
        item['url'] = response.url
        item['title'] = response.css('#center_column > div > div.primary_block.clearfix > div.pb-center-column.col-xs-12.col-sm-7.col-md-7.col-lg-7 > h1 ::text').extract_first()
        item['Déclinaisons'] = remove_tags(response.css('#d_c_1852 > tbody > tr.combi_1852.\31 852_155.\31 852_26.odd > td.tl.sorting_1 > a > span ::text').extract_first() or '')
        yield item

Sample of the CSV I would like: [image: CSV]
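I imagine parse_product would need to yield one item per variation row instead of a single item per product, roughly like this (just a sketch; the row and cell selectors are guesses based on the ones I already use above):

    def parse_product(self, response):
        title = response.css('#center_column > div > div.primary_block.clearfix > div.pb-center-column.col-xs-12.col-sm-7.col-md-7.col-lg-7 > h1 ::text').extract_first()
        # yield one item per row of the combinations table, so the CSV gets one line per variation
        for row in response.css('#d_c_1852 > tbody > tr'):
            item = ProductItem()
            item['url'] = response.url
            item['title'] = title
            item['Déclinaisons'] = remove_tags(row.css('td.tl > a > span ::text').extract_first() or '')
            yield item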

Upvotes: 2

Views: 1372

Answers (1)

Granitosaurus

Reputation: 21436

Check out the official documentation on feed exports here.

In short, there are two approaches. The simplest one is to use the crawl command's --output argument (-o for short). For example:

scrapy crawl myspider -o myspider.csv

Scrapy will automatically convert the yielded items into a CSV file. For a more configurable approach, check out the documentation page linked at the beginning.
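For that second approach, the export can also be declared in the project settings instead of on the command line; a minimal sketch, assuming a recent Scrapy version that supports the FEEDS setting (older versions use FEED_URI and FEED_FORMAT instead):

    # settings.py
    FEEDS = {
        'myspider.csv': {'format': 'csv'},
    }
    # optional: control which fields end up as CSV columns, and in what order
    FEED_EXPORT_FIELDS = ['url', 'title', 'Déclinaisons']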

Upvotes: 2
