Reputation: 51
I have a .html file full of links. I would like to extract the domains without the http:// (so just the hostname portion of the link, e.g. blah.com), list them, and remove duplicates.
This is what I have come up with so far - I think the issue is the way I am trying to pass the $tree data.
#!/usr/local/bin/perl -w
use HTML::TreeBuilder 5 -weak; # Ensure weak references in use
use URI;

foreach my $file_name (@ARGV) {
    my $tree = HTML::TreeBuilder->new; # empty tree
    $tree->parse_file($file_name);

    my $u1 = URI->new($tree);
    print "host: ", $u1->host, "\n";
    print "Hey, here's a dump of the parse tree of $file_name:\n";

    # Now that we're done with it, we must destroy it.
    # $tree = $tree->delete; # Not required with weak references
}
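(For reference, the likely culprit is URI->new($tree): URI->new expects a link string, not an HTML::TreeBuilder object, so each href has to be pulled out of the tree first. A minimal sketch of that fix - the look_down criteria and the %seen dedup hash are illustrative, not from the original post:)
#!/usr/local/bin/perl -w
use strict;
use HTML::TreeBuilder 5 -weak;
use URI;

my %seen; # dedup hosts across all files
foreach my $file_name (@ARGV) {
    my $tree = HTML::TreeBuilder->new;
    $tree->parse_file($file_name);

    # Pull each <a href="..."> out of the tree and hand the
    # link string (not the tree itself) to URI
    foreach my $a ( $tree->look_down( _tag => 'a', href => qr/./ ) ) {
        my $host = eval { URI->new( $a->attr('href') )->host };
        next unless defined $host; # relative links, mailto:, etc. have no host
        print "host: $host\n" unless $seen{$host}++;
    }
}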
Upvotes: 5
Views: 291
Reputation: 6204
Here's another option:
use strict;
use warnings;
use Regexp::Common qw/URI/;
use URI;

my %hosts;

while (<>) {
    # Match every URI on the line; -keep captures the full match in $1
    $hosts{ URI->new($1)->host }++ while /$RE{URI}{-keep}/g;
}

print "$_\n" for keys %hosts;
Command-line usage: perl script.pl htmlFile1 [htmlFile2 ...] [>outFile]
You can pass the script multiple HTML files. The last, optional parameter redirects output to a file.
Partial output using the cnn.com home page as the HTML source:
www.huffingtonpost.com
a.visualrevenue.com
earlystart.blogs.cnn.com
reliablesources.blogs.cnn.com
insideman.blogs.cnn.com
cnnphotos.blogs.cnn.com
cnnpresents.blogs.cnn.com
i.cdn.turner.com
www.stylelist.com
js.revsci.net
z.cdn.turner.com
www.cnn.com
...
Hope this helps!
Upvotes: 2
Reputation:
Personally, I'd go with Mojo::DOM for this, using the URI module to extract the domains:
use Mojo::DOM;
use URI;
use List::AllUtils qw/uniq/;

# $html_code holds the HTML document as a string
my @domains = sort +uniq
    map  eval { URI->new( $_->{href} )->authority } // (),
    Mojo::DOM->new( $html_code )->find("a[href]")->each;
(P.S. the exception handling on ->authority
is because some URIs will croak there, like mailto: links.)
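(To run the snippet end to end, here is one minimal way to populate $html_code from files named on the command line and print the result - the slurp and the print loop are additions, not part of the snippet above:)
use strict;
use warnings;
use Mojo::DOM;
use URI;
use List::AllUtils qw/uniq/;

# Read every input file into one string
my $html_code = join '', <>;

my @domains = sort +uniq
    map  eval { URI->new( $_->{href} )->authority } // (),
    Mojo::DOM->new( $html_code )->find("a[href]")->each;

print "$_\n" for @domains;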
Upvotes: 4