user7569725

Scrapy: AttributeError: 'YourCrawler' object has no attribute 'parse_following_urls'

I am writing a scrapy spider. I have been reading this question: Scrapy: scraping a list of links, and I can make it recognise the URLs in a list page, but I can't make it go inside the URLs and save the data I want to see.

from scrapy.contrib.spiders import CrawlSpider
from scrapy.selector import Selector
from scrapy.http import Request

class YourCrawler(CrawlSpider):
    name = "bookstore_2"
    start_urls = [
    'https://example.com/materias/?novedades=LC&p',
    ]

    def parse(self, response):
        # go to the urls in the list
        s = Selector(response)
        page_list_urls = s.xpath('///*[@id="results"]/ul/li/div[1]/h4/a[2]/@href').extract()
        for url in page_list_urls:
            yield Request(url, callback=self.parse_following_urls, dont_filter=True)

            # For the urls in the list, go inside, and in div#main, take the div.ficha > div.caracteristicas > ul > li
            def parse_following_urls(self, response):
                #Parsing rules go here
                for each_book in response.css('div#main'):
                    yield {
                    'book_isbn': each_book.css('div.ficha > div.caracteristicas > ul > li').extract(),
                    }
                    # Return back and go to next page in div#paginat ul li.next a::attr(href) and begin again
                    next_page = response.css('div#paginat ul li.next a::attr(href)').extract_first()
                    if next_page is not None:
                        next_page = response.urljoin(next_page)
                        yield Request(next_page, callback=self.parse)

It gives an error:

AttributeError: 'YourCrawler' object has no attribute 'parse_following_urls'

And I don't understand why!

EDIT --

As the answer says, I had to dedent the method so that it is defined at class level, like here:

from scrapy.contrib.spiders import CrawlSpider
from scrapy.selector import Selector
from scrapy.http import Request

class YourCrawler(CrawlSpider):
    name = "bookstore_2"
    start_urls = [
    'https://example.com/materias/?novedades=LC&p',
    ]

    def parse(self, response):
        # go to the urls in the list
        s = Selector(response)
        page_list_urls = s.xpath('///*[@id="results"]/ul/li/div[1]/h4/a[2]/@href').extract()
        for url in page_list_urls:
            yield Request(url, callback=self.parse_following_urls, dont_filter=True)

    # For the urls in the list, go inside, and in div#main, take the div.ficha > div.caracteristicas > ul > li
    def parse_following_urls(self, response):
        #Parsing rules go here
        for each_book in response.css('div#main'):
            yield {
            'book_isbn': each_book.css('div.ficha > div.caracteristicas > ul > li').extract(),
            }
            # Return back and go to next page in div#paginat ul li.next a::attr(href) and begin again
            next_page = response.css('div#paginat ul li.next a::attr(href)').extract_first()
            if next_page is not None:
                next_page = response.urljoin(next_page)
                yield Request(next_page, callback=self.parse)

But there is another problem, I think related to the urls, and now I am having this traceback:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/Users/nikita/scrapy/bookstore_2/bookstore_2/spiders/bookstore_2.py", line 16, in parse
    yield Request(url, callback=self.parse_following_urls, dont_filter=True)
  File "/usr/local/lib/python2.7/site-packages/scrapy/http/request/__init__.py", line 25, in __init__
    self._set_url(url)
  File "/usr/local/lib/python2.7/site-packages/scrapy/http/request/__init__.py", line 58, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: /book/?id=9780374281083

Maybe because I have to tell Scrapy what the base URL is? Should I add a urljoin somewhere?

EDIT_2 ---

OK, the problem was with the URLs. Adding

response.urljoin()

solved this issue.
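What response.urljoin() does here is resolve the relative href against the page's own URL. The same resolution can be reproduced with the standard library alone; this sketch uses the URL from the traceback and the host mentioned in the answer below:

```python
from urllib.parse import urljoin  # on Python 2.7: from urlparse import urljoin

base = 'https://lacentral.com/materias/?novedades=LC&p'
relative = '/book/?id=9780374281083'

# A leading slash in the relative URL replaces the whole path,
# keeping the scheme and host of the base URL
print(urljoin(base, relative))  # https://lacentral.com/book/?id=9780374281083
```

Scrapy's Request refuses relative URLs (hence the "Missing scheme" ValueError), so every extracted href must be joined like this before being yielded.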

Upvotes: 0

Views: 2815

Answers (1)

Park

Reputation: 384

In your code,

  yield Request(url, callback=self.parse_following_urls, dont_filter=True)

you used parse_following_urls with self.
But parse_following_urls is defined inside the parse method, so it isn't a method of YourCrawler.
That's why the error says:
AttributeError: 'YourCrawler' object has no attribute 'parse_following_urls'
You should define it at class level, like:

class YourCrawler(CrawlSpider):
    def parse_following_urls(self, response):
        ...

to make it a method of the class.
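The scoping rule described here can be seen with a plain class, no Scrapy needed (hypothetical names):

```python
class Broken:
    def parse(self):
        # Local function: it exists only while parse() runs
        # and never becomes an attribute of the instance
        def helper(self):
            return 'data'
        return self.helper()  # would raise AttributeError

class Fixed:
    def parse(self):
        return self.helper()

    def helper(self):  # defined at class level, so it is a real method
        return 'data'

assert not hasattr(Broken(), 'helper')
assert Fixed().parse() == 'data'
```

The same applies to the spider: once parse_following_urls sits at class level, self.parse_following_urls resolves correctly when the Request callback fires.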

edit

For the additional question:

In your code, s.xpath('///*[@id="results"]/ul/li/div[1]/h4/a[2]/@href') selects the href attribute of the a element on the HTML page you want to scrape.
However, it is only '/book/?id=9780374281083', not the full URL.
So you should turn it into https://lacentral.com/book/?id=9780374281083 before using it.

Upvotes: 1
