Maverick

Reputation: 799

Spider not following links - scrapy

I am trying to build a spider that follows through 3 pages before getting to the page it scrapes. I have tested the responses in the shell; however, put together it doesn't seem to work and I am not sure why.

My code below:

# -*- coding: utf-8 -*-
import scrapy


class CollegiateSpider(scrapy.Spider):
    name = 'Collegiate'
    allowed_domains = ['collegiate-ac.com/uk-student-accommodation']
    start_urls = ['http://collegiate-ac.com/uk-student-accommodation/']

    # Step 1 - Get the area links

    def parse(self, response):
        for city in response.xpath('//*[@id="top"]/div[1]/div/div[1]/div/ul/li/a/text').extract():
            yield scrapy.Request(response.urljoin("/" + city), callback = self.parse_area_page)

    # Step 2 - Get the block links

    def parse_area_page(self, response):
        for url in response.xpath('//div[3]/div/div/div/a/@href').extract():
            yield scrapy.Request(response.urljoin(url), callback=self.parse_unitpage)

    # Step 3 Get the room links 

    def parse_unitpage(self, response):
        for url in response.xpath('//*[@id="subnav"]/div/div[2]/ul/li[5]/a/@href').extract():
            yield scrapy.Request(response.urljoin(url), callback=self.parse_final)

    # Step 4 - Scrape the data

    def parse_final(self, response):
        pass

I have tried changing to CrawlSpider as per this answer, but that didn't seem to help.

I am currently looking into how to debug spiders; however, I am struggling with that, so I thought it would be beneficial to get opinions here as well.
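
For reference, this is the kind of check I have been adding while debugging (a minimal sketch of my own parse method - the self.logger.info call just reports how many links the XPath matched, so I can see where the chain stops):

def parse(self, response):
    links = response.xpath('//*[@id="top"]/div[1]/div/div[1]/div/ul/li/a/text').extract()
    # log how many links this callback extracted and from which page
    self.logger.info('parse: %d links on %s', len(links), response.url)
    for city in links:
        yield scrapy.Request(response.urljoin("/" + city), callback=self.parse_area_page)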

Upvotes: 0

Views: 243

Answers (1)

furas

Reputation: 142631

You forgot the () in text() in '//*[@id="top"]/div[1]/div/div[1]/div/ul/li/a/text()'.

But instead of text() I use @href to get the url.

urljoin('/' + city) creates a wrong url because the leading / skips the /uk-student-accommodation part of the path - you have to use urljoin(city).
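
The difference is easy to check (a standalone sketch with the standard library's urljoin, which Response.urljoin is built on; 'bristol' is just an example path):

from urllib.parse import urljoin

base = 'https://collegiate-ac.com/uk-student-accommodation/'

# a leading slash resolves against the domain root and drops the existing path
print(urljoin(base, '/bristol'))   # https://collegiate-ac.com/bristol

# a relative path resolves against the current directory and keeps it
print(urljoin(base, 'bristol'))    # https://collegiate-ac.com/uk-student-accommodation/bristol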

There was also a problem with allowed_domains - it blocked most of the urls.


Working example. You can run it without a project and it saves the final urls in output.csv.

import scrapy


class CollegiateSpider(scrapy.Spider):

    name = 'Collegiate'

    allowed_domains = ['collegiate-ac.com']

    start_urls = ['https://collegiate-ac.com/uk-student-accommodation/']

    # Step 1 - Get the area links

    def parse(self, response):
        for url in response.xpath('//*[@id="top"]/div[1]/div/div[1]/div/ul/li/a/@href').extract():
            url = response.urljoin(url)
            #print('>>>', url)
            yield scrapy.Request(url, callback=self.parse_area_page)

    # Step 2 - Get the block links

    def parse_area_page(self, response):
        for url in response.xpath('//div[3]/div/div/div/a/@href').extract():
            url = response.urljoin(url)
            yield scrapy.Request(url, callback=self.parse_unitpage)

    # Step 3 - Get the room links

    def parse_unitpage(self, response):
        for url in response.xpath('//*[@id="subnav"]/div/div[2]/ul/li[5]/a/@href').extract():
            url = response.urljoin(url)
            yield scrapy.Request(url, callback=self.parse_final)

    # Step 4 - Scrape the data

    def parse_final(self, response):
        # show some information for test
        print('>>> parse_final:', response.url)
        # send url as item so it can save it in file
        yield {'final_url': response.url}

# --- run it without project ---

import scrapy.crawler 

c = scrapy.crawler.CrawlerProcess({
    "FEED_FORMAT": 'csv',
    "FEED_URI": 'output.csv'
})
c.crawl(CollegiateSpider)
c.start()
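
CrawlerProcess is the documented way to run a spider from a plain script instead of scrapy crawl; the dict passed to it takes the place of a project's settings.py, and the two FEED_* keys are what make the yielded {'final_url': ...} items land in output.csv.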

Upvotes: 2
