Raghav Goyal

Reputation: 51

Trying to Extract Weblinks with BeautifulSoup

I am trying to extract all the PDF links on this page.

My code is:

import requests
from bs4 import BeautifulSoup
from pprint import pprint

base_url = 'https://usda.library.cornell.edu'

url = 'https://usda.library.cornell.edu/concern/publications/3t945q76s?locale=en#release-items'

soup = BeautifulSoup(requests.get(url).pdf, 'html.parser')
b = []

page = 1
while True:
    pdf_urls = [a["href"] for a in soup.select('#release-items a[href$=".pdf"]')]
    pprint(pdf_urls)
    b.append(pdf_urls)

    m = soup.select_one('a[rel="next"][href]')
    if m and m['href'] != '#':
        soup = BeautifulSoup(requests.get(base_url + m['href']).pdf, 'html.parser')
    else:
        break

I get the following error:

AttributeError: 'Response' object has no attribute 'pdf'

Similar code for text files works. Where am I going wrong?

Upvotes: 0

Views: 50

Answers (2)

UWTD TV

Reputation: 910

A small change to your code should make it work:

import requests
from bs4 import BeautifulSoup
from pprint import pprint

base_url = 'https://usda.library.cornell.edu'

url = 'https://usda.library.cornell.edu/concern/publications/3t945q76s?locale=en#release-items'

soup = BeautifulSoup(requests.get(url).text, 'html.parser')
b = []

page = 1
while True:
    pdf_urls = [a["href"] for a in soup.select('#release-items a[href$=".pdf"]')]
    pprint(pdf_urls)
    b.append(pdf_urls)

    m = soup.select_one('a[rel="next"][href]')
    if m and m['href'] != '#':
        soup = BeautifulSoup(requests.get(base_url + m['href']).text, 'html.parser')
    else:
        break

This:

soup = BeautifulSoup(requests.get(url).pdf, 'html.parser')

to:

soup = BeautifulSoup(requests.get(url).text, 'html.parser')

and this:

soup = BeautifulSoup(requests.get(base_url + m['href']).pdf, 'html.parser')

to this:

soup = BeautifulSoup(requests.get(base_url + m['href']).text, 'html.parser')

Output:

['https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/sb397x16q/b8516938c/latest.pdf',
 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/g158c396h/8910kd95z/latest.pdf',
 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/w6634p60m/2v23wd923/latest.pdf',
 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/q237jb60d/8910kc45j/latest.pdf',
 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/02871d57q/tx31r242v/latest.pdf',
 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/pz50hc74s/pz50hc752/latest.pdf',
 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/79408c82d/jw827v53v/latest.pdf',...

And so on...
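Note that b.append(pdf_urls) stores one list per page, so b ends up as a list of lists. If what you want is a single flat list of URLs (an assumption on my part), a minimal sketch:

# b is a list of per-page lists; flatten it into one list of URLs
all_pdf_urls = [u for page_urls in b for u in page_urls]
print(len(all_pdf_urls))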

Upvotes: 1

user13244731

Reputation:

I get the following error:

AttributeError: 'Response' object has no attribute 'pdf'

The method requests.get() always returns a Response object:

print(requests.get("https://stackoverflow.com/"))

will show:

<Response [200]>

If you check the available attributes with the dir() function, you will see that the Response object does not have a pdf attribute:

['__attrs__', '__bool__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__nonzero__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_content', '_content_consumed', '_next', 'apparent_encoding', 'close', 'connection', 'content', 'cookies', 'elapsed', 'encoding', 'headers', 'history', 'is_permanent_redirect', 'is_redirect', 'iter_content', 'iter_lines', 'json', 'links', 'next', 'ok', 'raise_for_status', 'raw', 'reason', 'request', 'status_code', 'text', 'url']
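As a quick check (a minimal sketch; any URL would do):

import requests

response = requests.get("https://stackoverflow.com/")
print(hasattr(response, "pdf"))      # False: no such attribute exists
print(hasattr(response, "text"))     # True: the body decoded to str
print(hasattr(response, "content"))  # True: the raw body as bytes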

You need to use requests.get(url).content to make the soup:

soup = BeautifulSoup(requests.get(url).content,'html.parser')
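For what it's worth, .text works just as well here: .content is the raw response body as bytes, while .text is the body decoded to str, and BeautifulSoup accepts either. A minimal sketch of the difference:

import requests

response = requests.get("https://stackoverflow.com/")
print(type(response.content))  # <class 'bytes'>: the raw response body
print(type(response.text))     # <class 'str'>: the body decoded with the detected encoding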

I am trying to extract all the PDF links on this page.

Checking the HTML body, you will see that all the files have a "file_set" class. You can get the "href" directly from these elements with a list comprehension:

pdf_urls = [x.a["href"] for x in soup.find_all(class_="file_set")]

Printing pdf_urls will show all the PDF links:

['https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/sb397x16q/b8516938c/latest.pdf', 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/g158c396h/8910kd95z/latest.pdf', 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/w6634p60m/2v23wd923/latest.pdf', 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/q237jb60d/8910kc45j/latest.pdf', 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/02871d57q/tx31r242v/latest.pdf', 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/pz50hc74s/pz50hc752/latest.pdf', 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/79408c82d/jw827v53v/latest.pdf', 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/1544c4419/6108vs89v/latest.pdf', 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/k930cb595/8910k788h/latest.pdf', 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/st74d522v/qb98mv97t/latest.pdf', 'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/sb397x16q/b8516938c/latest.pdf']
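Putting the pieces together, here is a sketch that combines the .content fix, the "file_set" selector, and the pagination loop from the question (base_url and the rel="next" page structure are taken from the question's code):

import requests
from bs4 import BeautifulSoup
from pprint import pprint

base_url = 'https://usda.library.cornell.edu'
url = 'https://usda.library.cornell.edu/concern/publications/3t945q76s?locale=en#release-items'

soup = BeautifulSoup(requests.get(url).content, 'html.parser')
all_pdf_urls = []

while True:
    # Collect the PDF link from every "file_set" element on the current page
    all_pdf_urls.extend(x.a["href"] for x in soup.find_all(class_="file_set"))

    # Follow the rel="next" pagination link until it is missing or a dead '#'
    m = soup.select_one('a[rel="next"][href]')
    if m and m['href'] != '#':
        soup = BeautifulSoup(requests.get(base_url + m['href']).content, 'html.parser')
    else:
        break

pprint(all_pdf_urls)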

Upvotes: 1
