Fincher

Reputation: 11

Scraping URLs from multiple webpages

I'm trying to extract URLs from multiple webpages (in this case 2) but for some reason, my output is a duplicate list of URLs extracted from the first page. What am I doing wrong?

My code:

import requests
from bs4 import BeautifulSoup

# URLs of books in scope
urls = []
baseUrl = 'https://www.goodreads.com'
for pn in range(2):
    path = '/shelf/show/bestsellers?page=' + str(pn + 1)
    page = requests.get(baseUrl + path).text
    print(baseUrl + path)
    soup = BeautifulSoup(page, "html.parser")
    for link in soup.findAll('a', attrs={'class': "leftAlignedImage"}):
        # Skip author links; keep only book links
        if not link['href'].startswith('/author/show/'):
            urls.append(baseUrl + link['href'])
for u in urls:
    print(u)

Output:

https://www.goodreads.com/shelf/show/bestsellers?page=1
https://www.goodreads.com/shelf/show/bestsellers?page=2
https://www.goodreads.com/book/show/5060378-the-girl-who-played-with-fire
https://www.goodreads.com/book/show/968.The_Da_Vinci_Code
https://www.goodreads.com/book/show/4667024-the-help
https://www.goodreads.com/book/show/2429135.The_Girl_with_the_Dragon_Tattoo
https://www.goodreads.com/book/show/3.Harry_Potter_and_the_Sorcerer_s_Stone
.
.
.
https://www.goodreads.com/book/show/4588.Extremely_Loud_Incredibly_Close
https://www.goodreads.com/book/show/36809135-where-the-crawdads-sing
.
.
.
https://www.goodreads.com/book/show/4588.Extremely_Loud_Incredibly_Close
https://www.goodreads.com/book/show/36809135-where-the-crawdads-sing

Upvotes: 0

Views: 81

Answers (1)

Aziz

Reputation: 20765

You are getting duplicate URLs because both requests load the same page. That website only serves the first page of best-sellers to visitors who are not logged in, even if you set page=2.

To fix this, you will have to either modify your code to log in before loading the pages, or pass cookies exported from a logged-in browser session.
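A minimal sketch of the cookie approach using a requests.Session. The cookie name "_session_id2" and its value here are placeholders, not the site's actual cookie: copy the real name/value pairs from your browser's developer tools after logging in.

```python
import requests

# Placeholder cookie from a logged-in browser session; the name
# "_session_id2" and the value are assumptions -- check your browser's
# developer tools (Application > Cookies) for the real ones.
cookies = {"_session_id2": "paste-your-cookie-value-here"}

session = requests.Session()
session.cookies.update(cookies)  # sent automatically with every session request

baseUrl = 'https://www.goodreads.com'

def fetch_shelf_page(pn):
    # With the login cookie attached, page=2 should return the actual
    # second page instead of a repeat of page 1.
    path = '/shelf/show/bestsellers?page=' + str(pn + 1)
    return session.get(baseUrl + path).text
```

Then replace requests.get(baseUrl + path).text in your loop with fetch_shelf_page(pn); the BeautifulSoup parsing stays exactly the same.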

Upvotes: 1
