Reputation: 154
I'm quite new to web scraping. I previously used a script to extract URLs from a website with multiple pages and save them to a txt file. I would like to apply it to a new website, but this one has only a single page with a "show more" button.
Here is the webpage: http://sdg.iisd.org/news/
And here is my code:
import requests
from bs4 import BeautifulSoup
import time
import pandas as pd

links = []
for i in range(1):  # range(221) for a website with many pages
    url = 'http://sdg.iisd.org/news/'  # + str(i) <-- for a website with many pages
    response = requests.get(url, headers={'User-agent': 'Mozilla/5.0'})
    if response.ok:
        print('Page: ' + str(i))
        soup = BeautifulSoup(response.text, 'lxml')
        div = soup.findAll('article')
        for article in div:
            a = article.find('a')
            link = a['href']
            links.append('https://sdg.iisd.org/news' + link)

print(len(links))
with open('urls.txt', 'w') as file:
    for link in links:
        file.write(link + '\n')
Some people suggest using Selenium, but I couldn't find an example of an application similar to mine. Do you have any idea what I could use, and what I should change in my code, to obtain all the links on the page?
Upvotes: 1
Views: 1263
Reputation: 10809
If you log your browser's network traffic, you can see that pressing the "Show more" button makes an XHR request to http://sdg.iisd.org/wp-admin/admin-ajax.php via HTTP POST, and the response is HTML. You can copy the POST payload from your browser's dev tools as well. Play around with the pageNumber and ppp key-value pairs in the data payload dictionary to get different articles:
def main():
    import requests
    from bs4 import BeautifulSoup as Soup
    from operator import itemgetter

    url = "http://sdg.iisd.org/wp-admin/admin-ajax.php"
    data = {
        "template": "load_more",
        "post_type": "news",
        "sdgs": "",
        "issues": "",
        "globalpartnership": "",
        "actors": "",
        "actions": "",
        "regions": "",
        "behaviour": "exact",
        "sort_by": "DESC",
        "pageNumber": "1",
        "ppp": "12",
        "action": "more_post_ajax",
        "author": ""
    }

    response = requests.post(url, data=data)
    response.raise_for_status()

    soup = Soup(response.content, "html.parser")
    article_urls = list(map(itemgetter("href"), soup.select("article > a")))
    print(article_urls)

    return 0

if __name__ == "__main__":
    import sys
    sys.exit(main())
Output:
['http://sdg.iisd.org/news/wef-event-explores-ways-to-fix-international-trade-system/', 'http://sdg.iisd.org/news/wto-members-resume-negotiations-on-fisheries-subsidies/', 'http://sdg.iisd.org/news/informal-ministerial-highlights-role-of-trade-in-promoting-covid-19-recovery/', 'http://sdg.iisd.org/news/wto-imf-project-uneven-covid-19-recovery-across-and-within-countries/', 'http://sdg.iisd.org/news/53-wto-members-commit-to-ease-restrictions-on-humanitarian-food-aid/', 'http://sdg.iisd.org/news/development-goals-can-work-even-amid-crisis-but-we-need-to-measure-better/', 'http://sdg.iisd.org/news/unctad-partners-launch-tool-to-identify-exchange-traded-funds-with-sdg-alignment/', 'http://sdg.iisd.org/news/tool-helps-measure-quality-of-stakeholder-engagement-in-sdgs/', 'http://sdg.iisd.org/news/unctad-reveals-economic-slowdown-before-covid-19-provides-key-data-on-rcep-agreement/', 'http://sdg.iisd.org/news/unep-report-identifies-top-actions-to-minimize-adverse-impacts-of-pesticides-fertilizers/', 'http://sdg.iisd.org/news/regions-to-hold-sustainable-development-forums-ahead-of-2021-hlpf/', 'http://sdg.iisd.org/news/ndc-partnership-reflects-on-milestone-year-for-climate-ambition/']
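To collect every article rather than just the first page of twelve, you can keep incrementing pageNumber in the same payload until the endpoint returns a page with no articles, then write the links to urls.txt as in your original script. A rough sketch along those lines (the 221-page cap and the empty-response stopping condition are my assumptions, not something the endpoint documents):

```python
import requests
from bs4 import BeautifulSoup

def extract_article_urls(html):
    # Each result card is an <article> whose direct-child <a> holds the link.
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.select("article > a")]

def scrape_all_pages(max_pages=221):
    # Same POST payload as in the answer above; only pageNumber changes
    # between requests.
    url = "http://sdg.iisd.org/wp-admin/admin-ajax.php"
    data = {
        "template": "load_more",
        "post_type": "news",
        "sdgs": "",
        "issues": "",
        "globalpartnership": "",
        "actors": "",
        "actions": "",
        "regions": "",
        "behaviour": "exact",
        "sort_by": "DESC",
        "pageNumber": "1",
        "ppp": "12",
        "action": "more_post_ajax",
        "author": ""
    }

    links = []
    for page in range(1, max_pages + 1):
        data["pageNumber"] = str(page)
        response = requests.post(url, data=data)
        response.raise_for_status()
        urls = extract_article_urls(response.text)
        if not urls:
            # An empty page presumably means we've walked past the last article.
            break
        links.extend(urls)
    return links

if __name__ == "__main__":
    links = scrape_all_pages()
    print(len(links))
    with open("urls.txt", "w") as file:
        for link in links:
            file.write(link + "\n")
```

Since the hrefs returned by this endpoint are already absolute (see the output above), there is no need to prepend 'https://sdg.iisd.org/news' to them as in your original loop.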
Upvotes: 1