rahmatheruka

Reputation: 57

Scrapy: how to still get the content when the status is 302 (redirect)

This is my simple spider code (I just started):

def start_requests(self):
    urls = [
        'http://www.liputan6.com/search?q=bubarkan+hti&type=all',
    ]
    for url in urls:
        yield scrapy.Request(url=url, callback=self.parse)

def parse(self, response):
    page = response.url.split("/")[-2]
    filename = 'quotes-%s.html' % page
    with open(filename, 'wb') as f:
        f.write(response.body)
    self.log('Saved file %s' % filename)

In a browser I can access the URL 'http://www.liputan6.com/search?q=bubarkan+hti&type=all' normally, but with Scrapy I get a 302 response and fail to crawl the page.

Can anyone tell me how to fix it?
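
One way to inspect what is happening here: Scrapy's RedirectMiddleware follows 302s automatically, but a request can opt out via the dont_redirect meta key, so the callback receives the raw 302 and its Location header. A minimal debugging sketch, with an illustrative spider name:

import scrapy

class RedirectDebugSpider(scrapy.Spider):
    # illustrative spider for inspecting the redirect
    name = 'redirect_debug'

    def start_requests(self):
        yield scrapy.Request(
            'http://www.liputan6.com/search?q=bubarkan+hti&type=all',
            # keep RedirectMiddleware from following the 302 and let
            # HttpErrorMiddleware pass the 302 response to the callback
            meta={'dont_redirect': True, 'handle_httpstatus_list': [302]},
            callback=self.parse,
        )

    def parse(self, response):
        # with redirects disabled, this logs the 302 and its target
        self.log('status: %s' % response.status)
        self.log('Location: %s' % response.headers.get('Location'))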

Upvotes: 0

Views: 1595

Answers (1)

Granitosaurus

Reputation: 21446

It seems the webpage expects certain cookies; if they are not present, it redirects to the index page.

I got it working by adding these cookies: js_enabled=true; is_cookie_active=true:

$ scrapy shell "http://www.liputan6.com/search?q=bubarkan+hti&type=all"
# redirect happens
In [1]: response.url
Out[1]: 'http://www.liputan6.com'
# add cookie to request:
In [2]: request.headers['Cookie'] = 'js_enabled=true; is_cookie_active=true;'
In [3]: fetch(request)
# redirect no longer happens
In [4]: response.url
Out[4]: 'http://www.liputan6.com/search?q=bubarkan+hti&type=all'

Edit: For your code try:

def start_requests(self):
    urls = [
        'http://www.liputan6.com/search?q=bubarkan+hti&type=all',
    ]
    for url in urls:
        req = scrapy.Request(url=url, callback=self.parse)
        # send the cookies the site checks for before it serves the page
        req.headers['Cookie'] = 'js_enabled=true; is_cookie_active=true;'
        yield req

def parse(self, response):
    # 200 response here
    ...
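
As a side note, instead of setting the raw Cookie header you can pass the cookies as a dict via the cookies= argument of scrapy.Request, which lets Scrapy's CookiesMiddleware manage them. A minimal sketch of the same start_requests:

def start_requests(self):
    urls = [
        'http://www.liputan6.com/search?q=bubarkan+hti&type=all',
    ]
    for url in urls:
        # CookiesMiddleware builds the Cookie header from this dict
        yield scrapy.Request(
            url=url,
            callback=self.parse,
            cookies={'js_enabled': 'true', 'is_cookie_active': 'true'},
        )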

Upvotes: 1
