Reputation: 87210
I'm writing a simple web crawler in Ruby and I need to extract all the href attributes on a page. What is the best way to do this, or to parse web page source in general, given that some pages might not be valid HTML but I still want to be able to parse them?
Are there any good Ruby HTML parsers that allow validity-agnostic parsing, or is the best way just to do it by hand with regexps?
Is it possible to use XPath on a non-XHTML page?
Upvotes: 1
Views: 408
Reputation: 7783
Take a look at Mechanize. I'm pretty sure it has methods for grabbing all the links on a page.
Upvotes: 1