Zimby

Reputation: 61

Spider not scraping page/writing

I am using the following code to scrape data with Scrapy:

from scrapy.selector import Selector
from scrapy.spider import Spider


class ExampleSpider(Spider):
    name = "example"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        sel = Selector(response)
        for li in sel.xpath('//ul/li'):
            title = li.xpath('a/text()').extract()
            link = li.xpath('a/@href').extract()
            desc = li.xpath('text()').extract()
            print title, link, desc

However, when I run this spider, I get the following log output:

2014-06-30 23:39:00-0500 [scrapy] INFO: Scrapy 0.24.1 started (bot: tutorial)
2014-06-30 23:39:00-0500 [scrapy] INFO: Optional features available: ssl, http11
2014-06-30 23:39:00-0500 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['tutorial.spiders'], 'FEED_URI': 'willthiswork.csv', 'BOT_NAME': 'tutorial'}
2014-06-30 23:39:01-0500 [scrapy] INFO: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-06-30 23:39:01-0500 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-06-30 23:39:01-0500 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-06-30 23:39:01-0500 [scrapy] INFO: Enabled item pipelines: 
2014-06-30 23:39:01-0500 [example] INFO: Spider opened
2014-06-30 23:39:01-0500 [example] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-06-30 23:39:01-0500 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2014-06-30 23:39:01-0500 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2014-06-30 23:39:01-0500 [example] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)

Of note is the line "Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)", as well as the overridden settings.

Additionally, the file I intended to write my data to is completely blank.

Is there something I am doing wrong that is preventing the data from being written?

Upvotes: 2

Views: 1520

Answers (1)

Frederic Bazin

Reputation: 1529

I am assuming you are running something like scrapy crawl example -o willthiswork.csv (the overridden settings in your log show FEED_URI and FEED_FORMAT already set).

The feed exporter only writes out items that the spider yields; print just sends the values to stdout, which is why the file stays blank and the log reports "scraped 0 items". To make this work, you need to use Scrapy items.

Add the following to items.py:

from scrapy.item import Item, Field


class MozItem(Item):
    # one Field per value the spider will export
    title = Field()
    link = Field()
    desc = Field()
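
Then make the item available in your spider module. Assuming the usual project layout (your log's NEWSPIDER_MODULE of 'tutorial.spiders' suggests the project package is tutorial), the import would be something like:

from tutorial.items import MozItem  # module path assumed from the project name in your log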

and adjust the parse function so it yields items instead of printing them:

    def parse(self, response):
        sel = Selector(response)
        for li in sel.xpath('//ul/li'):
            item = MozItem()  # create a fresh item for every <li>
            item['title'] = li.xpath('a/text()').extract()
            item['link'] = li.xpath('a/@href').extract()
            item['desc'] = li.xpath('text()').extract()
            yield item
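
With the items yielded, the feed exporter should pick them up on the next run, e.g. scrapy crawl example -o willthiswork.csv (the same FEED_URI your log already shows).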

Upvotes: 1
