Maxyie

Reputation: 891

How do I parse an XML webpage in Perl?

Hello. Currently I am able to parse the XML file if it is saved in my folder from the webpage.

use strict;
use warnings;
use Data::Dumper;
use XML::Simple;

my $parser = new XML::Simple;
my $data = $parser->XMLin("config.xml");
print Dumper($data);

But it does not work if I try to parse it directly from the website.

use strict;
use warnings;
use Data::Dumper;
use XML::Simple;

my $parser = new XML::Simple;
my $data = $parser->XMLin("http://website/computers/computers_main/config.xml");
print Dumper($data);

It gives me the following error: "File does not exist: http://website/computers/computers_main/config.xml at test.pl line 12"

How do I parse multiple XML files from the webpage? I have to grab multiple XML files from the websites and parse them. Can someone please help me with this?

Upvotes: 3

Views: 8202

Answers (3)

iCanHasFay

Reputation: 670

Super Edit: This method requires WWW::Mechanize, but it lets you log in to your website and then fetch the XML page. You will have to change a few things, which are noted in the comments. Hope this helps.

use strict;
use warnings;
use Data::Dumper;
use XML::Simple;
use WWW::Mechanize;
use HTTP::Cookies;

# Create a new instance of Mechanize
my $bot = WWW::Mechanize->new();
# Create a cookie jar for the login credentials
$bot->cookie_jar(
    HTTP::Cookies->new(
        file           => "cookies.txt",
        autosave       => 1,
        ignore_discard => 1,
    )
);
# Connect to the login page
my $response = $bot->get( 'http://www.thePageYouLoginTo.com' );
# Select the login form
$bot->form_number(1);
# Enter the login credentials.
# You're going to have to change the field names login and
# pass (on the left) to match the names used in the form you're
# logging into (found in the source of the website). Then you can
# put your respective credentials on the right.
$bot->field( login => 'thisIsWhereYourLoginInfoGoes' );
$bot->field( pass  => 'thisIsWhereYourPasswordInfoGoes' );
$response = $bot->click();
# Get the xml page
$response = $bot->get( 'http://website/computers/computers_main/config.xml' );
my $content = $response->decoded_content();
my $parser  = XML::Simple->new;
my $data    = $parser->XMLin($content);
print Dumper($data);

Give this a go. It uses LWP::Simple, as suggested in the answer above. It simply connects to the page, grabs that page's content (the XML file), and runs it through XMLin. Edit: added simple error checking at the get $url line. Edit2: keeping this code here because it should work if a login is not required.

use strict;
use warnings;
use Data::Dumper;
use XML::Simple;
use LWP::Simple;

my $parser = XML::Simple->new;

my $url = 'http://website/computers/computers_main/config.xml';
# get() returns undef on failure, so bail out with a message
my $content = get($url) or die "Unable to get $url\n";
my $data = $parser->XMLin($content);

print Dumper($data);
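
If you need to grab several XML files, you can loop over their URLs. Below is a minimal sketch of that, assuming the list of URLs is known up front (the second URL is just a placeholder):

use strict;
use warnings;
use Data::Dumper;
use XML::Simple;
use LWP::Simple;

my $parser = XML::Simple->new;

# Placeholder list -- replace with the real XML locations
my @urls = (
    'http://website/computers/computers_main/config.xml',
    'http://website/computers/computers_main/another.xml',
);

for my $url (@urls) {
    # Skip (with a warning) any page that cannot be fetched
    my $content = get($url);
    unless ( defined $content ) {
        warn "Unable to get $url\n";
        next;
    }
    my $data = $parser->XMLin($content);
    print Dumper($data);
}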

Upvotes: 2

David W.

Reputation: 107090

Read the documentation for XML::Simple. Notice that the XMLin method can take a file name, a string of XML, or even an IO::Handle object. What it can't take is a URL fetched via HTTP.
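
For example, here is a minimal sketch of those input forms, assuming a local config.xml as in the question (the XML string is made up):

use strict;
use warnings;
use Data::Dumper;
use XML::Simple;
use IO::File;

my $parser = XML::Simple->new;

# A string of XML works...
my $from_string = $parser->XMLin('<config><host>example</host></config>');

# ...and so does an IO::Handle object such as IO::File
my $fh = IO::File->new('config.xml', 'r') or die "Cannot open config.xml: $!\n";
my $from_handle = $parser->XMLin($fh);

print Dumper( $from_string, $from_handle );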

Use the Perl module LWP::Simple to fetch the XML file you need and pass that through to XMLin.

You'll have to download and install LWP::Simple using cpan, as you did before for XML::Simple.

Upvotes: 3

rpg

Reputation: 1652

If you don't have any specific reason to stick with XML::Simple, then use another parser such as XML::Twig or XML::LibXML, which provide built-in support for parsing XML fetched from the web.

Here is simple code for the same task using XML::Twig:

use strict;
use warnings;
use XML::Twig;
use LWP::Simple;

my $url  = 'http://website/computers/computers_main/config.xml';
my $twig = XML::Twig->new();
# Fetch the document first; get() returns undef on failure
my $xml = LWP::Simple::get( $url ) or die "Unable to get $url\n";
$twig->parse( $xml );

As said, XML::Simple does not have such a built-in feature.
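
For comparison, a minimal sketch of the built-in route with XML::LibXML, assuming that module is installed; its load_xml method accepts a URL directly in the location parameter (libxml2 fetches plain http URLs itself):

use strict;
use warnings;
use XML::LibXML;

my $url = 'http://website/computers/computers_main/config.xml';
# location may be a file name or a URL
my $doc = XML::LibXML->load_xml( location => $url );
print $doc->toString;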

Upvotes: 1
