Reputation: 13548
If I use
require 'net/http'
source = Net::HTTP.get('stackoverflow.com', '/index.html')
to fetch the source code from a URL, is there a way, in Ruby, to find all link elements with a certain class and then extract the href attribute of those links into an array? (I know how I would do this in JavaScript, but not in Ruby.)
Perhaps I do not want to use net/http at all?
Upvotes: 0
Views: 735
Reputation: 66837
Sounds to me like Nokogiri would be perfect for you.
require 'nokogiri'
require 'open-uri'
doc = Nokogiri::HTML(open('http://stackoverflow.com/index.html'))
doc.xpath('//h3/a[@class="foo"]').each do |element|
# do something with element
end
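If you want the hrefs collected into an array, as the question asks, you can map over the matches instead of looping. A minimal sketch, reusing the doc above and assuming the links you want are plain a elements with a hypothetical "foo" class:
# collect the href attribute of every <a class="foo"> into an array
hrefs = doc.css('a.foo').map { |a| a['href'] }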
Upvotes: 4
Reputation: 1755
require 'open-uri'
require 'hpricot'
source = open('http://stackoverflow.com/index.html').read # fetch the raw HTML
doc = Hpricot(source) # parse it with Hpricot
# collect the href of every link carrying the 'foo_bar' class
links = doc.search("//a[@class~='foo_bar']").collect { |a| a[:href] }
NB: code is not optimized, so read Hpricot documentation if you'd like to improve it ;)
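For instance, Hpricot's search also accepts CSS selectors, which reads a little more cleanly. A sketch with the same assumed 'foo_bar' class:
# same result, using a CSS selector instead of the XPath-style expression
links = doc.search("a.foo_bar").map { |a| a[:href] }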
Upvotes: 1
Reputation: 2446
Try searching for "parsing HTML / DOM in Ruby"; I'm sure there are a ton of relevant results out there. For example:
How to manipulate DOM with Ruby on Rails
Upvotes: 0