Levi Hackwith

Reputation: 9332

Directly Downloading a File From an RSS feed Using Ruby - Handling Redirects

I'm writing a program in Ruby that downloads a file from an RSS feed to my local hard drive. Previously, I'd written this application in Perl and figured a great way to learn Ruby would be to recreate this program using Ruby code.

In the Perl program (which works), I was able to download the original file directly from the server it was hosted on (keeping the original file name) and it worked great. In the Ruby program (which isn't working), I have to sort of "stream" the data from the file I want into a new file that I've created on my hard drive. Unfortunately, this isn't working and the "streamed" data is always coming back empty. My assumption is that there is some sort of redirect that Perl can handle to retrieve the file directly that Ruby cannot.

I'm going to post both programs (they're relatively small) and hope that this helps solve my issue. If you have questions, please let me know. As a side note, I pointed this program at a more static URL (a jpeg) and it downloaded the file just fine. This is why I'm theorizing that some sort of redirect is causing issues.

The Ruby Code (That Doesn't Work)


require 'net/http';
require 'open-uri';
require 'rexml/document';
require 'sqlite3';
# Create new SQLite3 database connection
db_connection = SQLite3::Database.new('fiend.db');
# Make sure I can reference records in the query result by column name instead of index number
db_connection.results_as_hash = true;
# Grab all TV shows from the shows table
query = '
    SELECT
        id,
        name,
        current_season,
        last_episode
    FROM
        shows
    ORDER BY
        name
';
# Run through each record in the result set
db_connection.execute(query) { |show|
    # Pad the current season number with a zero for later use in a search query
    season = '%02d' % show['current_season'].to_s;
    # Calculate the next episode number and pad with a zero
    next_episode = '%02d' % (Integer(show['last_episode']) + 1).to_s;
    # Store the name of the show
    name = show['name'];
    # Generate the URL of the RSS feed that will hold the list of torrents
    feed_url = URI.encode("http://btjunkie.org/rss.xml?query=#{name} S#{season}E#{next_episode}&o=52");
    # Generate a simple string the denotes the show, season and episode number being retrieved
    episode_id = "#{name} S#{season}E#{next_episode}";
    puts "Loading feed for #{name}..";
    # Store the response from the download of the feed
    feed_download_response = Net::HTTP.get_response(URI.parse(feed_url));
    # Store the contents of the response (in this case, XML data)
    xml_data = feed_download_response.body;
    puts "Feed Loaded. Parsing items.."
    # Create a new REXML Document and pass in the XML from the Net::HTTP response
    doc = REXML::Document.new(xml_data);
    # Loop through each <item> in the feed
    doc.root.each_element('//item') { |item|
        # Find and store the URL of the torrent we wish to download
        torrent_url = item.elements['link'].text + '/download.torrent';
        puts "Downloading #{episode_id} from #{torrent_url}";
        ## This is where crap stops working
        # Open Connection to the host
        Net::HTTP.start(URI.parse(torrent_url).host, 80) { |http|
          # Create a torrent file to dump the data into
          File.open("#{episode_id}.torrent", 'wb') { |torrent_file|
              # Try to grab the torrent data
              data = http.get(torrent_url[19..torrent_url.size], "User-Agent" => "Mozilla/4.0").body;
              # Write the data to the torrent file (the data is always coming back blank)
              torrent_file.write(data);
              # Close the torrent file
              torrent_file.close();
          }

        }
        break;
    }
}

The Perl Code (That Does Work)


use strict;
use XML::Parser;
use LWP::UserAgent;
use HTTP::Status;
use DBI;
my $dbh = DBI->connect("dbi:SQLite:dbname=fiend.db", "", "", { RaiseError => 1, AutoCommit => 1 });
my $userAgent = new LWP::UserAgent; # Create new user agent
$userAgent->agent("Mozilla/4.0"); # Spoof our user agent as Mozilla
$userAgent->timeout(20); # Set timeout limit for request
my $currentTag = ""; # Stores what tag is currently being parsed
my $torrentUrl = ""; # Stores the data found in any <link> node
my $isDownloaded = 0; # 1 or 0 indicating whether or not we've downloaded a particular episode
my $shows = $dbh->selectall_arrayref("SELECT id, name, current_season, last_episode FROM shows ORDER BY name");
my $id = 0;
my $name = "";
my $season = 0;
my $last_episode = 0;
foreach my $show (@$shows) { 
    $isDownloaded = 0;
    ($id, $name, $season, $last_episode) = (@$show);
    $season = sprintf("%02d", $season); # Append a zero to the season (e.g. 6 becomes 06)
    $last_episode = sprintf("%02d", ($last_episode + 1)); # Append a zero to the last episode (e.g. 6 becomes 06) and increment it by one
    print("Checking $name S" . $season . "E" . "$last_episode \n"); 
    my $request = new HTTP::Request(GET => "http://btjunkie.org/rss.xml?query=$name S" . $season . "E" . $last_episode . "&o=52"); # Retrieve the torrent feed
    my $rssFeed = $userAgent->request($request);  # Store the feed in a variable for later access
    if($rssFeed->is_success) { # We retrieved the feed
        my $parser = new XML::Parser(); # Make a new instance of XML::Parser
        $parser->setHandlers # Set the functions that will be called when the parser encounters different kinds of data within the XML file.
        (
            Start => \&startHandler, # Handles start tags (e.g. <item>)
            End   => \&endHandler, # Handles end tags (e.g. </item>)
            Char  => \&DataHandler # Handles data inside of start and end tags
        );
        $parser->parsestring($rssFeed->content); # Parse the feed
    }
}

#
# Called every time XML::Parser encounters a start tag
# @param: $parseInstance {object} | Instance of the XML::Parser. Passed automatically when feed is parsed. 
# @param: $element {string} | The name of the XML element being parsed (e.g. "title"). Passed automatically when feed is parsed. 
# @attributes {array} | An array of all of the attributes of $element
# @returns: void
#
sub startHandler {
    my($parseInstance, $element, %attributes) = @_;
    $currentTag = $element;
}
#
# Called every time XML::Parser encounters anything that is not a start or end tag (i.e, all the data in between tags)
# @param: $parseInstance {object} | Instance of the XML::Parser. Passed automatically when feed is parsed. 
# @param: $element {string} | The name of the XML element being parsed (e.g. "title"). Passed automatically when feed is parsed. 
# @attributes {array} | An array of all of the attributes of $element
# @returns: void
#
sub DataHandler {
    my($parseInstance, $element, %attributes) = @_;
    if($currentTag eq "link" && $element ne "\n") {
        $torrentUrl = $element;
    }
}
#
# Called every time XML::Parser encounters an end tag
# @param: $parseInstance {object} | Instance of the XML::Parser. Passed automatically when feed is parsed. 
# @param: $element {string} | The name of the XML element being parsed (e.g. "title"). Passed automatically when feed is parsed. 
# @attributes {array} | An array of all of the attributes of $element
# @returns: void
#
sub endHandler {
    my($parseInstance, $element, %attributes) = @_;
    if($element eq "item" && $isDownloaded == 0) { # We just finished parsing an <item> element so let's attempt to download a torrent
        print("DOWNLOADING: $torrentUrl" . "/download.torrent \n");
        system("echo.|lwp-download " . $torrentUrl . "/download.torrent"); # We echo the "return " key into the command to force it to skip any file-overwite prompts
        if(unlink("download.torrent.html")) { # We tried to download a 'locked' torrent
            $isDownloaded = 0; # Forces program to download next torrent on list from current show
        }
        else {
            $isDownloaded = 1;
            $dbh->do("UPDATE shows SET last_episode = '$last_episode' WHERE id = '$id'"); # Update DB with new show information
        }   
    }
}

Upvotes: 0

Views: 1063

Answers (2)

jason.rickman

Reputation: 15201

Yes, the URLs you are retrieving appear to be returning a 302 (redirect). Net::HTTP requires (or rather, allows) you to handle the redirect yourself. You typically use a recursive technique like AboutRuby mentioned, although this thread (http://www.ruby-forum.com/topic/142745) suggests you should look not only at the 'Location' field but also for a META REFRESH in the response.
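
A minimal sketch of that recursive approach, reusing episode_id and torrent_url from the question (the fetch_torrent helper name and the five-redirect limit are hypothetical; it only follows the Location header, which is assumed to be an absolute URL, so a META REFRESH in the body would still need separate handling):

require 'net/http'
require 'uri'

# Follow up to `limit` redirects, then return the final response body
def fetch_torrent(url, limit = 5)
  raise 'Too many redirects' if limit == 0
  response = Net::HTTP.get_response(URI.parse(url))
  case response
  when Net::HTTPSuccess     then response.body
  when Net::HTTPRedirection then fetch_torrent(response['location'], limit - 1)
  else response.error!
  end
end

File.open("#{episode_id}.torrent", 'wb') { |torrent_file|
  torrent_file.write(fetch_torrent(torrent_url))
}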

open-uri will handle redirects for you if you're not interested in the low-level interaction:

require 'open-uri'

File.open("#{episode_id}.torrent", 'wb') {|torrent_file| torrent_file.write open(torrent_url).read}

Upvotes: 1

AboutRuby

Reputation: 8116

get_response returns an instance of a class from the Net::HTTPResponse hierarchy. It's usually HTTPSuccess, but if there's a redirect it will be HTTPRedirection. A simple recursive method that follows redirects can solve this; the docs show how to handle it correctly under the heading "Following Redirection."
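
For example (a rough sketch reusing torrent_url from the question, not the docs' code), checking the response class makes the redirect visible:

require 'net/http'
require 'uri'

response = Net::HTTP.get_response(URI.parse(torrent_url))

puts response.class        # likely Net::HTTPFound (a 302) here, not Net::HTTPOK
puts response['location']  # the URL the server actually wants you to fetch

# Branch on the class: write the body on success, follow the Location
# header (recursively, with a depth limit) on a redirect
if response.is_a?(Net::HTTPSuccess)
  torrent_data = response.body
elsif response.is_a?(Net::HTTPRedirection)
  torrent_data = Net::HTTP.get_response(URI.parse(response['location'])).body
end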

Upvotes: 0
