Reputation: 457
I am running the spider below, but it is not entering the parse method, and I don't know why. Can someone please help?
My code is below:
from scrapy.item import Item, Field
from scrapy.selector import Selector
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class MyItem(Item):
    reviewer_ranking = Field()

print "asdadsa"

class MySpider(BaseSpider):
    name = 'myspider'
    allowed_domains = ["amazon.com"]
    start_urls = ["http://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp"]
    print "sadasds"

    def parse(self, response):
        print "fggfggftgtr"
        sel = Selector(response)
        hxs = HtmlXPathSelector(response)
        item = MyItem()
        item["reviewer_ranking"] = hxs.select('//span[@class="a-size-small a-color-secondary"]/text()').extract()
        return item
The output I am getting is as follows:
$ scrapy runspider crawler_reviewers_data.py
asdadsa
sadasds
/home/raj/Documents/IIM A/Daily sales rank/Daily reviews/Reviews_scripts/Scripts_review/Reviews/Reviewer/crawler_reviewers_data.py:18: ScrapyDeprecationWarning: crawler_reviewers_data.MySpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider. (warning only on first subclass, there may be others)
class MySpider(BaseSpider):
2014-06-24 19:21:35+0530 [scrapy] INFO: Scrapy 0.22.2 started (bot: scrapybot)
2014-06-24 19:21:35+0530 [scrapy] INFO: Optional features available: ssl, http11
2014-06-24 19:21:35+0530 [scrapy] INFO: Overridden settings: {}
2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, HttpProxyMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled item pipelines:
2014-06-24 19:21:35+0530 [myspider] INFO: Spider opened
2014-06-24 19:21:35+0530 [myspider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-06-24 19:21:35+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6027
2014-06-24 19:21:35+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6084
2014-06-24 19:21:36+0530 [myspider] DEBUG: Crawled (403) <GET http://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp> (referer: None) ['partial']
2014-06-24 19:21:36+0530 [myspider] INFO: Closing spider (finished)
2014-06-24 19:21:36+0530 [myspider] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 259,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 28487,
'downloader/response_count': 1,
'downloader/response_status_count/403': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 6, 24, 13, 51, 36, 631236),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2014, 6, 24, 13, 51, 35, 472849)}
2014-06-24 19:21:36+0530 [myspider] INFO: Spider closed (finished)
Please help; I am stuck at this point.
Upvotes: 2
Views: 1429
Reputation: 473873
This is an anti-web-crawling technique used by Amazon: you are getting 403 Forbidden because Amazon requires a User-Agent header to be sent with the request.
One option would be to manually add it to the Request yielded from start_requests():
from scrapy.http import Request

class MySpider(BaseSpider):
    name = 'myspider'
    allowed_domains = ["amazon.com"]

    def start_requests(self):
        # Send a browser-like User-Agent so Amazon does not respond with 403 Forbidden
        yield Request("https://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp",
                      headers={'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"})

    ...
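Requests yielded from start_requests() without an explicit callback are handled by the spider's parse() method, so once the response comes back 200 instead of 403, your parse() code should run as expected.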
Another option would be to set the DEFAULT_REQUEST_HEADERS setting project-wide.
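A minimal sketch of what that could look like in the project's settings.py (the header values below are just example choices); note that for the User-Agent header specifically, Scrapy also reads the dedicated USER_AGENT setting:

# settings.py -- a minimal sketch; the values below are example choices

# Headers merged into every request that does not already set them
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}

# The dedicated setting Scrapy uses to fill in the User-Agent header
USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"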
Also note that Amazon provides an API which has a Python wrapper; consider using it.
Hope that helps.
Upvotes: 3