Vasyl Boliuk

Reputation: 13

Scrapy: parse item data from a page, then follow a link to get additional item data

I have a problem scraping additional fields that are on other pages after I have scraped the data from the first page.

Here is my code:

from scrapy.selector import HtmlXPathSelector
from scrapy.http import HtmlResponse
from IMDB_Frompage.items import ImdbFrompageItem
from scrapy.http import Request
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

URL = "http://www.imdb.com/search/title?count=100&ref_=nv_ch_mm_1&start=1&title_type=feature,tv_series,tv_movie"

class MySpider(CrawlSpider):
    name = "imdb"
    allowed_domains = ["imdb.com"]
    start_urls = [URL]
    DOWNLOAD_DELAY = 0.5

    rules = (Rule(SgmlLinkExtractor(allow=('100&ref'), restrict_xpaths=('//span[@class="pagination"]/a[contains(text(),"Next")]')), callback='parse_page', follow=True),)

    def parse_page(self, response):
        hxs = HtmlXPathSelector(response)
        item = ImdbFrompageItem()
        links = hxs.select("//td[@class='title']")
        items=[]
        for link in links:
            item = ImdbFrompageItem()
            item['link'] = link.select("a/@href").extract()[0]
            item['new_link'] ='http://www.imdb.com'+item['link']
            new_links = ''.join(item['new_link'])
            request = Request(new_links, callback=self.parsepage2)
            request.meta['item'] = item
            yield request
            yield item

    def parsepage2(self, response):
        item = response.meta['item']
        hxs = HtmlXPathSelector(response)
        blocks = hxs.select("//td[@id='overview-top']")
        for block in blocks:
            item = ImdbFrompageItem()
            item["title"] = block.select("h1[@class='header']/span[@itemprop='name']/text()").extract()
            item["year"] = block.select("h1[@class='header']/span[@class='nobr']").extract()
            item["description"] = block.select("p[@itemprop='description']/text()").extract()
            yield item

Part of the results is:

{"link": , "new_link": }
{"link": , "new_link": }
{"link": , "new_link": }
{"link": , "new_link": }
....
{"link": , "new_link": }
{"title": , "description":}
{"title": , "description":}
next page
{"link": , "new_link": }
{"link": , "new_link": }
{"link": , "new_link": }
{"title": , "description":}

My results don't contain all of the data ({"title": , "description":}) for each link.

But I want something like that:

{"link": , "new_link": }
{"title": , "description":}
{"link": , "new_link": }
{"title": , "description":}
{"link": , "new_link": }
{"title": , "description":}
{"link": , "new_link": }
....
{"link": , "new_link": }
{"title": , "description":}
next page
{"link": , "new_link": }
{"title": , "description":}
{"link": , "new_link": }
{"title": , "description":}
{"link": , "new_link": }
{"title": , "description":}

Any suggestions as to what I am doing wrong?

Upvotes: 1

Views: 2794

Answers (1)

Jimmy Zhang

Reputation: 967

Scrapy can't ensure that all requests are parsed in order; request handling is unordered.

The execution sequence may look like this:

  1. call parse_page();
  2. call parse_page();
  3. call parse_page();
  4. call parsepage2();
  5. ....

Maybe you can change your code like this to get what you want:

def parse_page(self, response):
    hxs = HtmlXPathSelector(response)
    links = hxs.select("//td[@class='title']")
    for link in links:
        href = link.select("a/@href").extract()[0]
        new_link = 'http://www.imdb.com' + href
        request = Request(new_link, callback=self.parsepage2)
        # Carry the first-page fields along in request.meta
        request.meta['link'] = href
        request.meta['new_link'] = new_link
        yield request


def parsepage2(self, response):
    hxs = HtmlXPathSelector(response)
    blocks = hxs.select("//td[@id='overview-top']")
    for block in blocks:
        item = ImdbFrompageItem()
        # Read the first-page fields back out of response.meta
        item["link"] = response.meta["link"]
        item["new_link"] = response.meta["new_link"]
        item["title"] = block.select("h1[@class='header']/span[@itemprop='name']/text()").extract()
        item["year"] = block.select("h1[@class='header']/span[@class='nobr']").extract()
        item["description"] = block.select("p[@itemprop='description']/text()").extract()

        yield item

So you will get results like this:

{"link": , "new_link": ,"title": , "description":}

I am not sure that my code will run as-is; I just want to give you an idea of how to achieve what you want.
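The core idea can be demonstrated without Scrapy at all: attach the first-page fields to each scheduled "request", and only emit a complete item from the second callback, so ordering no longer matters. This is a minimal sketch, not the spider itself; the helper names mirror the callbacks above, and the shuffle stands in for Scrapy's unordered downloading.

```python
import random

def parse_page(hrefs):
    # First callback: one pending "request" per link, context in "meta".
    for href in hrefs:
        yield {"callback": parsepage2,
               "meta": {"link": href,
                        "new_link": "http://www.imdb.com" + href}}

def parsepage2(meta, detail):
    # Second callback: merge the carried context with detail-page data.
    item = dict(meta)
    item.update(detail)
    yield item

# Simulate the unordered scheduler by shuffling the pending requests.
pending = list(parse_page(["/title/tt0111161/", "/title/tt0068646/"]))
random.shuffle(pending)

items = []
for req in pending:
    fake_detail = {"title": "some title", "description": "some text"}
    items.extend(req["callback"](req["meta"], fake_detail))

# Every emitted item is complete, whatever order the requests ran in.
for item in items:
    print(sorted(item.keys()))
```

Because each item is assembled in one place, no half-filled {"link": , "new_link": } records can ever be emitted.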

Upvotes: 1

Related Questions