Reputation: 1615
I am writing a utility that should hit the URL of a dynamic page, retrieve the content, search for a specific div tag among various nested div tags, and grab its content.
Mainly, I am looking for some Java code/library. JavaScript or a JavaScript-based library would also work for me.
I have shortlisted the following: JSoup, Jerry, JTidy (last updated 2009-12-01). Which one is best performance-wise?
Edit: Rephrased the question. Added shortlisted lib.
Upvotes: 0
Views: 780
Reputation: 3525
If you like jQuery's simple syntax, you can try Jerry:
Jerry is a jQuery in Java. Jerry is a fast and concise Java library that simplifies HTML document parsing, traversing, and manipulating.
Jerry is designed to change the way you parse HTML content.
The syntax is very simple; it should solve your problem in at most three lines of code.
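As a minimal sketch of what that could look like (the HTML string, the `div` id, and the exact factory method are placeholders here — older Jodd versions expose `Jerry.jerry(...)`, newer ones `Jerry.of(...)`, so check your version's API):

```java
import jodd.jerry.Jerry;

public class DivGrabber {
    public static void main(String[] args) {
        // Sample nested markup standing in for the fetched page content
        String html = "<div id='outer'><div id='target'>Hello</div></div>";

        // Parse the document and select the target div with a CSS selector
        Jerry doc = Jerry.jerry(html);
        String text = doc.$("div#target").text();

        System.out.println(text); // prints "Hello"
    }
}
```

You would still need to fetch the page content yourself (e.g. with `java.net.URL` or an HTTP client) before handing the HTML string to Jerry.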
Upvotes: 2
Reputation: 1089
If you want to scrape a page and parse it, I recommend using Node with jsdom.
Install Node.js (assuming Linux):
sudo apt-get install git
cd ~
git clone git://github.com/joyent/node
cd node
git checkout v0.6
mkdir ~/.local # If it doesn't already exist
./configure --prefix=~/.local
make
make install
There is also a windows installer: http://nodejs.org/dist/v0.6.6/node-v0.6.6.msi
Install jsdom:
$ npm install jsdom
Run this script modified with your url and the relevant selectors:
var jsdom = require('jsdom');
jsdom.env({
  html: 'url',
  done: function (errors, window) {
    console.log(window.document.getElementById('foo').textContent);
  }
});
Upvotes: 2
Reputation: 18099
If what you're after is a selector engine, then Sizzle is your best bet. It's the engine used by jQuery.
Upvotes: 1
Reputation: 8858
Give each div a unique id and retrieve it using document.getElementById(id).
Upvotes: 0