Reputation: 22440
The code I wrote here got me results from other sites, but for some reason this site throws an error. As I'm new to Scrapy, I'm not able to settle the issue myself. The XPaths are all right. I'm attaching what I see in the terminal along with the code:
items.py
import scrapy

class OlxItem(scrapy.Item):
    Title = scrapy.Field()
    Url = scrapy.Field()
olxsp.py
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class OlxspSpider(CrawlSpider):
    name = "olxsp"
    allowed_domains = ['olx.com.pk']
    start_urls = ['https://www.olx.com.pk/']

    rules = [
        Rule(LinkExtractor(restrict_xpaths='//div[@class="lheight16 rel homeIconHeight"]')),
        Rule(LinkExtractor(restrict_xpaths='//li[@class="fleft tcenter"]'),
             callback='parse_items', follow=True),
    ]

    def parse_items(self, response):
        page = response.xpath('//h3[@class="large lheight20 margintop10"]')
        for post in page:
            AA = post.xpath('.//a[@class="marginright5 link linkWithHash detailsLink"]/span/text()').extract()
            CC = post.xpath('.//a[@class="marginright5 link linkWithHash detailsLink"]/@href').extract()
            yield {'Title': AA, 'Url': CC}
settings.py
BOT_NAME = 'olx'
SPIDER_MODULES = ['olx.spiders']
NEWSPIDER_MODULE = 'olx.spiders'
ROBOTSTXT_OBEY = True
Image of the terminal after Scrapy finished running:
Upvotes: 1
Views: 532
Reputation: 18799
You have ROBOTSTXT_OBEY = True, which tells Scrapy to check the robots.txt file of the domains it crawls, so it can determine how to be polite to those sites.
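If that check is what stops your spider and you still decide to crawl the site, you can turn it off in settings.py. A minimal sketch (whether to ignore robots.txt is your call, not a technical detail):

# settings.py
# False disables the robots.txt check, so Scrapy will request pages
# that the site's robots.txt would otherwise disallow.
ROBOTSTXT_OBEY = False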
You are also allowing a different domain in allowed_domains = ['www.olx.com'] than the one you are actually crawling. If you are only going to crawl olx.com.pk sites, change allowed_domains to ['olx.com.pk']. If you don't actually know which sites you are going to crawl, just remove the allowed_domains attribute.
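For reference, the top of the spider would then look something like this (a sketch of just the changed attribute; your rules and parse_items stay exactly as they are):

from scrapy.spiders import CrawlSpider

class OlxspSpider(CrawlSpider):
    name = "olxsp"
    # Must match the domain you actually crawl; otherwise the offsite
    # middleware filters the followed links and you'll see
    # 'Filtered offsite request' messages in the log.
    allowed_domains = ['olx.com.pk']
    start_urls = ['https://www.olx.com.pk/']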
Upvotes: 1