Sohan Das

Reputation: 1620

Scrapy not parsing items

I'm trying to scrape a web page with pagination, but the callback is not parsing the items. Any help would be appreciated. Here is the code:

# -*- coding: utf-8 -*-
import scrapy
from ..items import EscrotsItem

class Escorts(scrapy.Spider):
    name = 'escorts'
    allowed_domains = ['www.escortsandbabes.com.au']
    start_urls = ['https://escortsandbabes.com.au/Directory/ACT/Canberra/2600/Any/All/']

    def parse_links(self, response):
        for i in response.css('.btn.btn-default.btn-block::attr(href)').extract()[2:]:
            yield scrapy.Request(url=response.urljoin(i),callback=self.parse)
        NextPage = response.css('.page.next-page::attr(href)').extract_first()
        if NextPage:
            yield scrapy.Request(
                url=response.urljoin(NextPage),
                callback=self.parse_links)

    def parse(self, response):
        for x in response.xpath('//div[@class="advertiser-profile"]'):
            item = EscrotsItem()
            item['Name'] = x.css('.advertiser-names--display-name::text').extract_first()
            item['Username'] = x.css('.advertiser-names--username::text').extract_first()
            item['Phone'] = x.css('.contact-number::text').extract_first()
            yield item

Upvotes: 0

Views: 396

Answers (1)

vezunchik

Reputation: 3717

Your spider requests the URLs in `start_urls` and, by default, sends the responses to the `parse` method. Since the directory page has no `div.advertiser-profile` elements, the spider yields nothing and closes without results — your `parse_links` function is never called at all.

Swap the function names, so `parse` handles the directory pages and the profile pages go to `parse_links`:

import scrapy


class Escorts(scrapy.Spider):
    name = 'escorts'
    allowed_domains = ['escortsandbabes.com.au']
    start_urls = ['https://escortsandbabes.com.au/Directory/ACT/Canberra/2600/Any/All/']

    def parse(self, response):
        # Follow each profile link on the directory page (skip the first two buttons)
        for i in response.css('.btn.btn-default.btn-block::attr(href)').extract()[2:]:
            yield scrapy.Request(response.urljoin(i), self.parse_links)
        # Follow pagination; with no callback given, the response goes back to parse
        next_page = response.css('.page.next-page::attr(href)').get()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page))

    def parse_links(self, response):
        for x in response.xpath('//div[@class="advertiser-profile"]'):
            item = {}
            item['Name'] = x.css('.advertiser-names--display-name::text').get()
            item['Username'] = x.css('.advertiser-names--username::text').get()
            item['Phone'] = x.css('.contact-number::text').get()
            yield item
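As an aside, `response.urljoin` resolves the relative hrefs returned by the `::attr(href)` selectors against the current page URL, the same way the standard library's `urllib.parse.urljoin` does. A quick sketch with the pagination href from this page (values taken from the question and shell output, not re-run):

```python
from urllib.parse import urljoin

page = "https://escortsandbabes.com.au/Directory/ACT/Canberra/2600/Any/All/"
# Relative pagination href as returned by the .page.next-page selector
next_href = "/Directory/ACT/Canberra/2600/Any/All/?p=2"

# response.urljoin(next_href) resolves it the same way:
print(urljoin(page, next_href))
# https://escortsandbabes.com.au/Directory/ACT/Canberra/2600/Any/All/?p=2
```

So both absolute and site-relative links can be passed straight into `scrapy.Request` after going through `urljoin`.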

My logs from scrapy shell:

In [1]: fetch("https://escortsandbabes.com.au/Directory/ACT/Canberra/2600/Any/All/")
2019-03-29 15:22:56 [scrapy.core.engine] INFO: Spider opened
2019-03-29 15:23:00 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://escortsandbabes.com.au/Directory/ACT/Canberra/2600/Any/All/> (referer: None, latency: 2.48 s)

In [2]: response.css('.page.next-page::attr(href)').get()
Out[2]: u'/Directory/ACT/Canberra/2600/Any/All/?p=2'

Upvotes: 1
