Reputation: 942
I'm trying to write a monitoring script in Perl which should check a list of URLs. I am using the LWP::UserAgent, HTTP::Response and Time::HiRes modules.
Here is my code:
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Response;
use Time::HiRes qw( gettimeofday );
while (1) {
    my $start = gettimeofday();

    my $ua = LWP::UserAgent->new();
    $ua->agent('lb-healthcheck.pl/0.1');
    $ua->timeout(10);

    # download the tile locally
    my $response = $ua->get("myurl");
    my $content  = $response->content;

    my $end = gettimeofday();
    print "$start - $end = ".(($end-$start)*1000)."\n";
}
Running the script manually without the while loop in place, I get an average response time of about 70ms; with the while loop in place I get about 5ms, which seems unrealistic.
Does LWP::UserAgent do any caching? If so, is it possible to disable it, and how? If not, what am I doing wrong?
Upvotes: 1
Views: 1811
Reputation: 5469
LWP doesn't do caching, but the OS will probably cache data such as the results of DNS lookups, so those will only take time on the first lookup and again after the OS cache entry expires.
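One way to see the resolver effect in isolation is to time the name lookup on its own, separately from the HTTP fetch. A minimal sketch, using `localhost` as a stand-in host (substitute the host you are actually monitoring; against a remote host the first lookup is typically the slowest, before the OS/resolver cache kicks in):

```perl
use strict;
use warnings;
use Socket qw( getaddrinfo );
use Time::HiRes qw( gettimeofday tv_interval );

# Time three consecutive name lookups of the same host.
for my $i ( 1 .. 3 ) {
    my $t0 = [ gettimeofday() ];
    my ( $err, @res ) = getaddrinfo( "localhost", "http" );
    printf "lookup %d: %.3f ms%s\n",
        $i, tv_interval($t0) * 1000, $err ? " (error: $err)" : "";
}
```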
Upvotes: 0
Reputation: 126762
LWP doesn't do any caching of its own unless you tell it to, but there is a lot between LWP and the host site. Are you working through a proxy, for example? If so, it will be caching pages it fetches in case they are required a second time. There are also many other caches in the cloud that may be speeding up your response, but a time of 5ms implies a reasonably local cache.
You should also be using the tv_interval subroutine from Time::HiRes to calculate the intervals. It expects you to store the result pairs from gettimeofday in array references, and will calculate the difference between two of these pairs. Your code would look like this:
use Time::HiRes qw( gettimeofday tv_interval );

while () {
    my $start = [ gettimeofday() ];

    # download the tile locally

    my $end = [ gettimeofday() ];
    print tv_interval($start, $end), "\n";
}
For what it's worth, for an ordinary national web site I get around 500ms for the initial fetch, followed by roughly 300ms for subsequent fetches. So some caching is going on, but with far less impact than you are reporting.
Upvotes: 1
Reputation: 1711
It looks like you're not estimating the elapsed time correctly. In list context, gettimeofday returns a list containing both the seconds and the microseconds elapsed, so to calculate the elapsed time you need to do some conversion. Something like:
my ($init_sec, $init_usec) = gettimeofday();
# SOME CODE HERE
my ($stop_sec, $stop_usec) = gettimeofday();

# borrow a second if the microseconds would go negative
if ( $init_usec > $stop_usec ) {
    $stop_usec += 1_000_000;
    $stop_sec--;
}

# convert seconds into milliseconds
my $tsec  = ( $stop_sec - $init_sec ) * 1_000;
# convert microseconds into milliseconds
my $tusec = ( $stop_usec - $init_usec ) / 1_000;
# elapsed time in milliseconds is $tsec + $tusec
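Here is that arithmetic as a self-contained sketch you can run, with a 50ms Time::HiRes sleep standing in for the "SOME CODE HERE" section, so you can check that the computed value comes out near 50:

```perl
use strict;
use warnings;
use Time::HiRes qw( gettimeofday usleep );

my ($init_sec, $init_usec) = gettimeofday();
usleep(50_000);    # stand-in for the work being timed (~50 ms)
my ($stop_sec, $stop_usec) = gettimeofday();

# borrow a second if the microseconds would go negative
if ( $init_usec > $stop_usec ) {
    $stop_usec += 1_000_000;
    $stop_sec--;
}

my $elapsed_ms = ( $stop_sec - $init_sec ) * 1_000
               + ( $stop_usec - $init_usec ) / 1_000;

printf "elapsed: %.1f ms\n", $elapsed_ms;
```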
Upvotes: 0
Reputation: 8895
Try setting the conn_cache to an LWP::ConnCache object that is configured to drop all connections (see its total_capacity method, for example).
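A minimal sketch of that setup: attach a connection cache to the user agent, then set its total_capacity to 0, which per the LWP::ConnCache documentation drops all cached connections so no keep-alive connection is reused between requests:

```perl
use strict;
use warnings;
use LWP::UserAgent;
use LWP::ConnCache;

my $ua = LWP::UserAgent->new;

# A capacity of 0 means no idle connections are kept,
# so every request opens a fresh connection.
my $cache = LWP::ConnCache->new;
$cache->total_capacity(0);
$ua->conn_cache($cache);
```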
Upvotes: 0