Reputation: 79420
If I have say 20 HTML pages and I want to extract out the shared/similar portions of the documents, what are some efficient ways to do that?
So say for StackOverflow, comparing 10 pages, I'd find that the top bar and the main menu bar are the same across each page, so I could extract them out.
It seems like I'd need either a diff program or some complex regexps, but assume I don't have any knowledge of the page/text/html structure beforehand.
Is this possible?
Upvotes: 2
Views: 985
Reputation: 95402
You should consider a clone detector such as CloneDR. Good ones compare the structure of thousands of files at once, regardless of formatting, and will tell you which elements the files have in common and how those common elements vary.
CloneDR has been applied to many programming languages. Its foundation, the DMS Software Reengineering Toolkit, already handles (dirty) HTML, so it would be pretty easy to build an HTML CloneDR.
Upvotes: 1
Reputation: 13600
You don't need any complex regexps; a simple diff-style analysis will do. Use Enumerable#inject, keeping only the parts the pages share so far as your memo.
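A minimal sketch of that idea in Ruby, assuming the pages are saved as local files and that the shared markup matches line-for-line (the pages/*.html glob is just an illustrative path):

```ruby
# Collect the pages to compare.
# NOTE: "pages/*.html" is an illustrative path -- point it at your own files.
pages = Dir.glob("pages/*.html")

# Intersect the line sets pairwise: the memo starts as the first page's
# lines and shrinks to only the lines every subsequent page also contains.
common_lines = pages
  .map { |path| File.readlines(path) }
  .inject { |memo, lines| memo & lines }

puts common_lines
```

Note that Array#& drops duplicate lines and only catches markup that repeats verbatim on whole lines, so treat this as a rough cut; for fuzzier matching you'd want a proper diff/LCS, e.g. the diff-lcs gem.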
Hope this helps!
Upvotes: 0