Dave

Reputation: 1925

download a file with mechanize

I have a browser instance that has opened a page. I would like to download and save all the links (they are PDFs). Does someone know how to do it?

Thx

Upvotes: 4

Views: 2362

Answers (2)

cetver

Reputation: 11829

import urllib, urllib2, cookielib, re, os, urlparse
#http://www.crummy.com/software/BeautifulSoup/ - required
from BeautifulSoup import BeautifulSoup

HOST = 'https://www.adobe.com/'

# cookie-aware opener, so the site's session cookies are kept across requests
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

req = opener.open(HOST + 'pdf')
response = req.read()

soup = BeautifulSoup(response)
# every anchor whose href contains ".pdf"
pdfs = soup.findAll(name='a', attrs={'href': re.compile(r'\.pdf')})
for pdf in pdfs:
    # urljoin handles both relative hrefs and absolute http/https links
    url = urlparse.urljoin(HOST, pdf['href'])
    try:
        #http://docs.python.org/library/urllib.html#urllib.urlretrieve
        # pass a filename, otherwise urlretrieve saves to a temporary file
        urllib.urlretrieve(url, os.path.basename(urlparse.urlparse(url).path))
    except Exception, e:
        print 'cannot obtain url %s' % (url,)
        print 'from href %s' % (pdf['href'],)
        print e
    else:
        print 'downloaded file'
        print url
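
One caveat: urllib.urlretrieve opens its own plain connection, so the cookies collected by the opener above are not sent with the download. If the PDFs require the session, a minimal workaround (sketch only, same filename handling as above) is to fetch them through the same opener:

# inside the loop: download through the cookie-aware opener instead of urlretrieve
data = opener.open(url).read()
with open(os.path.basename(urlparse.urlparse(url).path), 'wb') as f:
    f.write(data)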

Upvotes: 3

David

Reputation: 18271

May not be the answer you're looking for, but I've used the lxml and requests libraries together for automated link fetching:

Relevant lxml examples: http://lxml.de/lxmlhtml.html#examples (replace urllib with requests)

And the requests library homepage http://docs.python-requests.org/en/latest/index.html

It's not as compact as mechanize but does offer more control.
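
A rough sketch of that approach, reusing the Adobe example URL from the answer above (the URL and the filename handling are illustrative, not part of either library):

import requests
from lxml import html

HOST = 'https://www.adobe.com/'

page = requests.get(HOST + 'pdf')
tree = html.fromstring(page.content)
# turn relative hrefs into absolute URLs before iterating
tree.make_links_absolute(HOST + 'pdf')

for url in tree.xpath('//a/@href'):
    if url.lower().endswith('.pdf'):
        # save each PDF under its own name in the current directory
        with open(url.split('/')[-1], 'wb') as f:
            f.write(requests.get(url).content)
        print 'downloaded', url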

Upvotes: 1
