icomefromchaos

Reputation: 245

Beautifulsoup download all .zip files from Google Patent Search

What I am trying to do is use BeautifulSoup to download every zip file from the Google Patent archive. Below is the code that I've written thus far, but it seems that I am having trouble getting the files to download into a directory on my desktop. Any help would be greatly appreciated.

from bs4 import BeautifulSoup 
import urllib2
import re
import pandas as pd

url = 'http://www.google.com/googlebooks/uspto-patents-grants.html'

site = urllib2.urlopen(url)
html = site.read()
soup = BeautifulSoup(html)
soup.prettify()

path = open('/Users/username/Desktop/', "wb")

for name in soup.findAll('a', href=True):
    print name['href']
    linkpath = name['href']
    rq = urllib2.request(linkpath)
    res = urllib2.urlopen(rq)
    path.write(res.read())

The result that I am supposed to get, is that all of the zip files are supposed to download into a specific dir. Instead, I am getting the following error:

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-13-874f34e07473> in <module>()
         17     print name['href']
         18     linkpath = name['href']
    ---> 19     rq = urllib2.request(namep)
         20     res = urllib2.urlopen(rq)
         21     path.write(res.read())
    AttributeError: 'module' object has no attribute 'request'

Upvotes: 1

Views: 3180

Answers (2)

JohnH

Reputation: 2721

In addition to using the non-existent request attribute of urllib2, you don't write the output correctly: you can't just open the directory, you have to open each output file separately.

Also, the 'Requests' package has a much nicer interface than urllib2. I recommend installing it.

Note that, today anyway, the first .zip is 5.7 GB, so streaming to a file is essential.

Really, you want something more like this:

from bs4 import BeautifulSoup
import requests

# point to output directory
outpath = 'D:/patent_zips/'
url = 'http://www.google.com/googlebooks/uspto-patents-grants.html'
mbyte=1024*1024

print 'Reading: ', url
html = requests.get(url).text
soup = BeautifulSoup(html)

print 'Processing: ', url
for name in soup.findAll('a', href=True):
    zipurl = name['href']
    if( zipurl.endswith('.zip') ):
        outfname = outpath + zipurl.split('/')[-1]
        r = requests.get(zipurl, stream=True)
        if( r.status_code == requests.codes.ok ) :
            fsize = int(r.headers['content-length'])
            print 'Downloading %s (%sMb)' % ( outfname, fsize/mbyte )
            with open(outfname, 'wb') as fd:
                for chunk in r.iter_content(chunk_size=1024): # chunk size can be larger
                    if chunk: # filter out keep-alive chunks
                        fd.write(chunk)

Upvotes: 2

Dan Cornilescu

Reputation: 39834

This is your problem:

rq = urllib2.request(linkpath)

urllib2 is a module, and it has no request attribute in it.

I see a Request class in urllib2 (note the capital R), but I'm unsure if that's what you actually intended to use...
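Assuming that is the intent, the fix is just the capitalization. A minimal sketch (the try/except import also covers Python 3, where urllib2 became urllib.request; building a Request object does not make any network call):

```python
try:
    from urllib2 import Request, urlopen          # Python 2
except ImportError:
    from urllib.request import Request, urlopen   # Python 3

linkpath = 'http://www.google.com/googlebooks/uspto-patents-grants.html'

# Request (capital R) is the class; urllib2.request does not exist.
rq = Request(linkpath)
print(rq.get_full_url())   # the URL the request was built with
# res = urlopen(rq)        # only now would the request actually be sent
```

urlopen also accepts a plain URL string, so wrapping it in a Request is only needed when you want to add headers or otherwise customize the request.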

Upvotes: 1
