Mujtaba

Reputation: 260

How to scrape multiple pages when page numbers are unordered

I'm trying to scrape a list of words from a website using BeautifulSoup. Scraping the first page is easy, but to reach the rest I need each page's identifier (an exact string), which is hard because the identifiers don't follow a conventional sequence like {1-100} or {a-z}; they're different for every page.

For example, this is the page where the links to all the remaining pages in the /a/ category are listed. Normally they would look like a/1, a/2, a/3, but in this case they are:

https://dictionary.cambridge.org/browse/english/a/a
https://dictionary.cambridge.org/browse/english/a/a-conflict-of-interest
https://dictionary.cambridge.org/browse/english/a/a-hard-tough-row-to-hoe
and so on...all the way to /english/z/{}
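Since the slugs can't be generated, they have to be read off the letter's index page itself. A minimal offline sketch of that step, run against a snippet modeled on the browse page's markup (the class name and structure here are assumptions, not the site's exact HTML):

```python
from bs4 import BeautifulSoup

# Hypothetical snippet modeled on the browse page's link list;
# the real markup and class names may differ.
html = """
<div class="hdf ff-50 lmt-15">
  <a href="https://dictionary.cambridge.org/browse/english/a/a/">a</a>
  <a href="https://dictionary.cambridge.org/browse/english/a/a-conflict-of-interest/">a conflict of interest</a>
  <a href="https://dictionary.cambridge.org/browse/english/a/a-hard-tough-row-to-hoe/">a hard/tough row to hoe</a>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
# Collect every sub-page URL from the letter's index page.
page_urls = [a["href"] for a in soup.select("div.hdf a")]
print(page_urls)
```

Once collected, each of these URLs can be fetched and parsed the same way as the first page.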

My Code:

import requests
from bs4 import BeautifulSoup as bs

url = 'https://dictionary.cambridge.org/browse/english/a/a/'
head = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36'
# regex = 'idiom$'

with open('output.txt', 'w', encoding="utf-8") as f_out:

    soup = bs(requests.get(url,headers={'User-Agent': head}).content, 'html.parser')
    div = soup.find('div', attrs={'class': 'hdf ff-50 lmt-15'})
    links = div.find_all('a')

    for link in links:

        text_str = link.text.strip()
        print(text_str)
        print(text_str, file=f_out)

It gets the text as expected, but after that I have no idea how to get to the next pages.

Upvotes: 1

Views: 289

Answers (1)

baduker

Reputation: 20042

You could just loop through the alphabet, grab every link's href attribute, clip the last path segment off of it (that's your word or expression), and then save it to a file.

Here's how:

import string

import requests
from bs4 import BeautifulSoup

headers = {
    "user-agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36',
}
letters = string.ascii_lowercase
main_url = "https://dictionary.cambridge.org/browse/english/"

for letter in letters:
    print(f"Fetching words for letter {letter.upper()}...")
    page = requests.get(f"{main_url}{letter}", headers=headers).content
    links = BeautifulSoup(page, "html.parser").find_all("a", {"class": "dil tcbd"})
    # Each href ends with the word's slug, so keep the
    # second-to-last path segment; the first link is skipped.
    with open(f"{letter}_words.txt", "w", encoding="utf-8") as output:
        output.writelines(
            "\n".join(a["href"].split("/")[-2] for a in links[1:]) + "\n"
        )
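As a quick check of how the slug falls out of `split("/")[-2]` (this relies on the hrefs carrying a trailing slash, which is what the `[-2]` index assumes):

```python
# One sample href from the browse page; the trailing slash means
# the last split() element is empty, so the slug sits at index -2.
href = "https://dictionary.cambridge.org/browse/english/a/a-conflict-of-interest/"
slug = href.split("/")[-2]
print(slug)  # a-conflict-of-interest
```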

Output: one file per letter, for example a_words.txt for the letter a:

a-conflict-of-interest
a-hard-tough-row-to-hoe
a-meeting-of-minds
a-pretty-fine-kettle-of-fish
a-thing-of-the-past
ab-initio
abduction
abo
abreast
absolute-motion
absurdity
accent
accidental-death-benefit
account-for-sth
acct
acetylcholinesterase
ackee
acrobatics
actionable
actuarial
adapting
adduce
adjective
administration-order
adoration
adumbrated
advertised
aerie
affect
affronting
afters
agender
agit-pop
...

Upvotes: 1
