Reputation: 67
I'm kind of a newbie with Scrapy. My spider is not working properly when I try to scrape data from a forum. When I run it, it only prints the URLs and then stops. So I think the problem is in the compatibility of the two functions parse and parse_data, but I may be wrong. Here is my code:
import scrapy, time


class ForumSpiderSpider(scrapy.Spider):
    name = 'forum_spider'
    allowed_domains = ['visforvoltage.org/latest_tech/']
    start_urls = ['http://visforvoltage.org/latest_tech//']

    def parse(self, response):
        for href in response.css(r"tbody a[href*='/forum/']::attr(href)").extract():
            url = response.urljoin(href)
            print(url)
            req = scrapy.Request(url, callback=self.parse_data)
            time.sleep(10)
            yield req

    def parse_data(self, response):
        for url in response.css('html').extract():
            data = {}
            data['name'] = response.css(r"div[class='author-pane-line author-name'] span[class='username']::text").extract()
            data['date'] = response.css(r"div[class='forum-posted-on']:contains('-') ::text").extract()
            data['title'] = response.css(r"div[class='section'] h1[class='title']::text").extract()
            data['body'] = response.css(r"div[class='field-items'] p::text").extract()
            yield data

        next_page = response.css(r"li[class='pager-next'] a[href*='page=']::attr(href)").extract()
        if next_page:
            yield scrapy.Request(
                response.urljoin(next_page),
                callback=self.parse)
Here is the output:
2020-07-23 23:09:58 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'visforvoltage.org': <GET https://visforvoltage.org/forum/14521-aquired-a123-m1-cells-need-charger-and-bms>
https://visforvoltage.org/forum/14448-battery-charger-problems
https://visforvoltage.org/forum/14191-vectrix-trickle-charger
https://visforvoltage.org/forum/14460-what-epoxy-would-you-recommend-loose-magnet-repair
https://visforvoltage.org/forum/14429-importance-correct-grounding-and-well-built-plugs
https://visforvoltage.org/forum/14457-147v-charger-24v-lead-acid-charger-and-dying-vectrix-cells
https://visforvoltage.org/forum/6723-lithium-safety-e-bike
https://visforvoltage.org/forum/11488-how-does-24v-4-wire-reversible-motor-work
https://visforvoltage.org/forum/14444-new-sevcon-gen-4-80v-sale
https://visforvoltage.org/forum/14443-new-sevcon-gen-4-80v-sale
https://visforvoltage.org/forum/12495-3500w-hub-motor-question-about-real-power-and-breaker
https://visforvoltage.org/forum/14402-vectrix-vx-1-battery-pack-problem
https://visforvoltage.org/forum/14068-vectrix-trickle-charger
https://visforvoltage.org/forum/2931-drill-motors
https://visforvoltage.org/forum/14384-help-repairing-gio-hub-motor-freewheel-sprocket
https://visforvoltage.org/forum/14381-zev-charger
https://visforvoltage.org/forum/8726-performance-unite-my1020-1000w-motor
https://visforvoltage.org/forum/7012-controler-mod-veloteq
https://visforvoltage.org/forum/14331-scooter-chargers-general-nfpanec
https://visforvoltage.org/forum/14320-charging-nissan-leaf-cells-lifepo4-charger
https://visforvoltage.org/forum/3763-newber-needs-help-new-gift-kollmorgan-hub-motor
https://visforvoltage.org/forum/14096-european-bldc-controller-seller
https://visforvoltage.org/forum/14242-lithium-bms-vs-manual-battery-balancing
https://visforvoltage.org/forum/14236-mosfet-wiring-ignition-key
https://visforvoltage.org/forum/2007-ok-dumb-question-time%3A-about-golf-cart-controllers
https://visforvoltage.org/forum/10524-my-mf70-recommended-powerpoles-arrived-today
https://visforvoltage.org/forum/9460-how-determine-battery-capacity
https://visforvoltage.org/forum/7705-tricking-0-5-v-hall-effect-throttle
https://visforvoltage.org/forum/13446-overcharged-lead-acid-battery-what-do
https://visforvoltage.org/forum/14157-reliable-high-performance-battery-enoeco-bt-p380
https://visforvoltage.org/forum/2702-hands-test-48-volt-20-ah-lifepo4-pack-ping-battery
https://visforvoltage.org/forum/14034-simple-and-cheap-ev-can-bus-adaptor
https://visforvoltage.org/forum/13933-zivan-ng-3-charger-specs-and-use
https://visforvoltage.org/forum/13099-controllers
https://visforvoltage.org/forum/13866-electric-motor-werks-demos-25-kilowatt-diy-chademo-leaf
https://visforvoltage.org/forum/13796-motor-theory-ac-vs-bldc
https://visforvoltage.org/forum/6184-bypass-bms-lifepo4-good-idea-or-not
https://visforvoltage.org/forum/13763-positive-feedback-kelly-controller
https://visforvoltage.org/forum/13764-any-users-smart-battery-drop-replacement-zapino-and-others
https://visforvoltage.org/forum/13760-contactor-or-fuse-position-circuit-rules-why
https://visforvoltage.org/forum/13759-contactor-or-fuse-position-circuit-rules-why
https://visforvoltage.org/forum/12725-repairing-lithium-battery-pack
https://visforvoltage.org/forum/13752-questions-sepex-motor-theory
https://visforvoltage.org/forum/13738-programming-curtis-controller-software
https://visforvoltage.org/forum/13741-making-own-simple-controller
https://visforvoltage.org/forum/12420-idea-charging-electric-car-portably-wo-relying-electricity-infrastructure
2020-07-23 23:17:28 [scrapy.extensions.logstats] INFO: Crawled 2 pages (at 2 pages/min), scraped 0 items (at 0 items/min)
2020-07-23 23:17:28 [scrapy.core.engine] INFO: Closing spider (finished)
As I see it, the spider didn't iterate over these links and collect the data from them. What could be the reason for that? I would really appreciate any help. Thank you!
Upvotes: 1
Views: 313
Reputation: 2564
The issue is probably that the requests are being filtered out, since they are not part of the allowed domain:
allowed_domains = ['visforvoltage.org/latest_tech/']
The new requests go to URLs like:
https://visforvoltage.org/forum/14448-battery-charger-problems
https://visforvoltage.org/forum/14191-vectrix-trickle-charger
...
Since the requests are to visforvoltage.org/forum/ and not to visforvoltage.org/latest_tech/, Scrapy's offsite middleware drops them (allowed_domains should contain only domain names, not paths).
You can remove the allowed_domains attribute entirely, or change it to:
allowed_domains = ['visforvoltage.org']
This will make the spider crawl those pages, and you will see a different value in this line of your log:
2020-07-23 23:17:28 [scrapy.extensions.logstats] INFO: Crawled 2 pages (at 2 pages/min), scraped 0 items (at 0 items/min)
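For reference, a minimal sketch of the corrected spider attributes (everything else from the question left unchanged):

class ForumSpiderSpider(scrapy.Spider):
    name = 'forum_spider'
    # Only the domain, with no path, so the offsite middleware also
    # accepts the /forum/... requests.
    allowed_domains = ['visforvoltage.org']
    start_urls = ['http://visforvoltage.org/latest_tech/']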
However, the selectors in the parsing don't seem right.

This selector:

response.css('html').extract()

will select the whole page, and the extract() method will return it as a list. So you end up with a list containing a single string composed of all the HTML of the page.

You can read more about selectors and the getall()/extract() methods here.
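For example, here is a minimal sketch of parse_data that yields one item per topic page using get()/getall(), reusing the CSS selectors from the question (whether those selectors actually match the forum's markup is an assumption I haven't verified):

def parse_data(self, response):
    # One topic page -> one item. getall() returns every match as a
    # list of strings; get() returns the first match or None.
    yield {
        'name': response.css("div[class='author-pane-line author-name'] span[class='username']::text").getall(),
        'date': response.css("div[class='forum-posted-on']:contains('-') ::text").getall(),
        'title': response.css("div[class='section'] h1[class='title']::text").get(),
        'body': response.css("div[class='field-items'] p::text").getall(),
    }

    # Follow pagination the same way the original code does.
    next_page = response.css("li[class='pager-next'] a[href*='page=']::attr(href)").get()
    if next_page:
        yield scrapy.Request(response.urljoin(next_page), callback=self.parse)

Yielding a plain dict per response keeps each scraped topic as its own item, instead of one item whose fields are lists covering the whole page.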
Upvotes: 0
Reputation: 2609
It works for me.
import scrapy


class ForumSpiderSpider(scrapy.Spider):
    name = 'forum_spider'
    allowed_domains = ['visforvoltage.org/latest_tech/']
    start_urls = ['http://visforvoltage.org/latest_tech/']

    def parse(self, response):
        for href in response.css(r"tbody a[href*='/forum/']::attr(href)").extract():
            url = response.urljoin(href)
            # dont_filter=True keeps the offsite middleware from dropping
            # the /forum/ requests that fall outside allowed_domains.
            req = scrapy.Request(url, callback=self.parse_data, dont_filter=True)
            yield req

    def parse_data(self, response):
        # Iterating over the selector (not .extract()) keeps url as a
        # Selector object, so .css() can be called on it below.
        for url in response.css('html'):
            data = {}
            data['name'] = url.css(r"div[class='author-pane-line author-name'] span[class='username']::text").extract()
            data['date'] = url.css(r"div[class='forum-posted-on']:contains('-') ::text").extract()
            data['title'] = url.css(r"div[class='section'] h1[class='title']::text").extract()
            data['body'] = url.css(r"div[class='field-items'] p::text").extract()
            yield data

        # extract_first() returns a single string (or None), which is what
        # urljoin() expects; extract() would return a list.
        next_page = response.css(r"li[class='pager-next'] a[href*='page=']::attr(href)").extract_first()
        if next_page:
            yield scrapy.Request(
                response.urljoin(next_page),
                callback=self.parse)
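Assuming the code above is saved as forum_spider.py (a file name I'm choosing here), you can run it standalone and dump the scraped items to JSON with:

scrapy runspider forum_spider.py -o items.json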
Upvotes: 1