Arunanand T A

Reputation: 1

Website Downloader using Python

I am trying to create a website downloader using Python. I have the code for:

  1. Finding all URLs from a page

  2. Downloading a given URL

What I have to do is recursively download a page, and if there are any other links in that page, I need to download them as well. I tried combining the two functions above, but the recursion doesn't work.

The code is given below:

1)

from sgmllib import SGMLParser

class URLLister(SGMLParser):
    def reset(self):
        SGMLParser.reset(self)
        self.urls = []

    def start_a(self, attrs):
        # Collect the href attribute of every <a> tag.
        href = [v for k, v in attrs if k == 'href']
        if href:
            self.urls.extend(href)

if __name__ == "__main__":
    import urllib
    wanted_url = raw_input("Enter the URL: ")
    usock = urllib.urlopen(wanted_url)
    parser = URLLister()
    parser.feed(usock.read())
    parser.close()
    usock.close()
    for url in parser.urls:
        download(url)

2) where the download(url) function is defined as follows:

def download(url):
    import urllib
    webFile = urllib.urlopen(url)
    localFile = open(url.split('/')[-1], 'w')
    localFile.write(webFile.read())
    webFile.close()
    localFile.close()
    a = raw_input("Enter the URL")
    download(a)
    print "Done"

Kindly help me combine these two pieces of code so that the new links found on a page are themselves downloaded recursively.

Upvotes: 0

Views: 3413

Answers (3)

Emil Stenström

Reputation: 14086

Generally, the idea is this:

def get_links_recursive(document, current_depth, max_depth):
    links = document.get_links()
    for link in links:
        downloaded = link.download()
        if current_depth < max_depth:
            get_links_recursive(downloaded, current_depth + 1, max_depth)

Call get_links_recursive(document, 0, 3) to get things started.
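As a rough Python 2 sketch of this idea, reusing the question's URLLister (the depth cap, the seen set, the file naming, and the starting depth of 0 are all assumptions here, and error handling is left out):

import urllib
from urlparse import urljoin
from sgmllib import SGMLParser

class URLLister(SGMLParser):
    # Same link collector as in the question.
    def reset(self):
        SGMLParser.reset(self)
        self.urls = []

    def start_a(self, attrs):
        href = [v for k, v in attrs if k == 'href']
        if href:
            self.urls.extend(href)

def get_links_recursive(url, current_depth, max_depth, seen):
    if url in seen:
        return  # already downloaded this page
    seen.add(url)
    html = urllib.urlopen(url).read()
    # Save the page locally, like the question's download().
    local_file = open(url.split('/')[-1] or 'index.html', 'w')
    local_file.write(html)
    local_file.close()
    if current_depth >= max_depth:
        return
    # Pull the links out of the page and go one level deeper.
    parser = URLLister()
    parser.feed(html)
    parser.close()
    for link in parser.urls:
        get_links_recursive(urljoin(url, link), current_depth + 1, max_depth, seen)

if __name__ == "__main__":
    get_links_recursive(raw_input("Enter the URL: "), 0, 2, set())

The seen set matters as much as the depth cap: pages usually link back to each other, so without it the recursion would revisit the same URLs forever.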

Upvotes: 1

spicavigo

Reputation: 4224

done_url = []

def download(url):
    if url in done_url:
        return
    # ...download url code...
    done_url.append(url)
    urls = some_function_to_fetch_urls_from_this_page()
    for url in urls:
        download(url)

This is very rough code. For example, you will need to check whether a URL is within the domain you want to crawl. However, you asked for recursion.
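A same-domain check can be a small helper along these lines (a sketch; start_url is simply whichever page the crawl began from):

from urlparse import urlparse

def same_domain(url, start_url):
    # Only follow links whose host matches the site the crawl started from.
    return urlparse(url).netloc == urlparse(start_url).netloc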

Be mindful of the recursion depth.
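On a large site, recursion like this can also run into Python's default recursion limit (roughly 1000 nested calls). As a sketch, the same crawl can be driven by an explicit to-do list instead, reusing the placeholders from the code above:

def crawl(start_url, max_pages=100):
    done_url = []
    to_visit = [start_url]
    while to_visit and len(done_url) < max_pages:
        url = to_visit.pop()
        if url in done_url:
            continue
        # ...download url code...
        done_url.append(url)
        # Placeholder from above, here given the current url.
        to_visit.extend(some_function_to_fetch_urls_from_this_page(url))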

There are just so many things wrong with my solution. :P

You should really try a crawling library like Scrapy.

Upvotes: 2

Acorn

Reputation: 50497

You may want to look into the Scrapy library.

It would make a task like this pretty trivial, and allow you to download multiple pages concurrently.
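As a rough sketch of what such a spider can look like against a recent Scrapy release (the spider name, start URL, and file-naming scheme are just illustrative):

import scrapy

class SiteSpider(scrapy.Spider):
    name = "site"
    start_urls = ["http://example.com/"]

    def parse(self, response):
        # Save each downloaded page under the last part of its URL path.
        filename = response.url.split("/")[-1] or "index.html"
        with open(filename, "wb") as f:
            f.write(response.body)
        # Queue every link on the page; Scrapy schedules and deduplicates them.
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)

Run it with something like scrapy runspider spider.py; Scrapy then handles request scheduling, duplicate filtering, and concurrent downloads for you.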

Upvotes: 2
