Visa De

Reputation: 21

newspaper (Python): get all CNN news URLs

For example, on this URL (https://edition.cnn.com/search/?q=%20news&size=10&from=5540&page=555):

In the HTML file I can find this link (HTML tag):

<div class="cnn-search__result-thumbnail">
    <a href="https://www.cnn.com/2018/03/27/asia/north-korea-kim-jong-un-china-visit/index.html">
        <img src="./Search CNN - Videos, Pictures, and News - CNN.com_files/180328104116china-xi-kim-story-body.jpg">
    </a>
</div>

But with this code:

import newspaper

url = "https://edition.cnn.com/search/?q=%20news&size=10&from=5540&page=555"
cnn_paper = newspaper.build(url, memoize_articles=False)
for article in cnn_paper.articles:
    print(article.url)

I cannot find the news links.

Both https://edition.cnn.com/search/?q=%20news&size=10&from=5540&page=555 and https://edition.cnn.com/search/?q=%20news&size=10&from=5550&page=556 return the same set of links.

Upvotes: 0

Views: 1006

Answers (2)

Ahmad Masalha

Reputation: 506

The search results are loaded dynamically from a JSON response returned by a different request: https://search.api.cnn.io/content?q=news&size=50&from=0

size can be at most 50.

import requests

# Request one page of results directly from the search API.
res = requests.get("https://search.api.cnn.io/content?q=news&size=50&from=0")
links = [x['url'] for x in res.json()['result']]
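
Since size caps at 50, getting more results means paging through the API with the from offset. A minimal sketch, assuming the response keeps the same result structure as above and that an empty result list marks the end of the data:

import requests

# Page through the search API 50 results at a time, collecting article URLs.
# Stopping on an empty page is an assumption about the API's behaviour.
all_links = []
offset = 0
while True:
    res = requests.get(
        "https://search.api.cnn.io/content",
        params={"q": "news", "size": 50, "from": offset},
    )
    results = res.json().get("result", [])
    if not results:
        break
    all_links.extend(x["url"] for x in results)
    offset += 50

print(len(all_links))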

Upvotes: 1

ASH

Reputation: 20322

Does this do what you want?

from bs4 import BeautifulSoup
import urllib.request

# Fetch two pages, substituting the loop value into the page parameter.
for numb in ('1', '100'):
    resp = urllib.request.urlopen(
        "https://edition.cnn.com/search/?q=%20news&size=10&from=5540&page=" + numb)
    soup = BeautifulSoup(resp, 'html.parser',
                         from_encoding=resp.info().get_param('charset'))

    for link in soup.find_all('a', href=True):
        print(link['href'])

Or, maybe this?

from bs4 import BeautifulSoup
from bs4.dammit import EncodingDetector
import requests

resp = requests.get("https://edition.cnn.com/search/?q=%20news&size=10&from=5540&page=555")

# Work out the encoding: prefer the charset declared in the HTML, then the HTTP header.
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
encoding = html_encoding or http_encoding
soup = BeautifulSoup(resp.content, 'html.parser', from_encoding=encoding)

# Print every anchor tag that has an href attribute.
for link in soup.find_all('a', href=True):
    print(link)
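
Note that find_all('a', href=True) returns every anchor on the page, navigation included. If only article links are wanted, one possible filter (an assumption based on the dated article URL shown in the question, e.g. /2018/03/27/...) is a regex over the href, reusing the soup object from the block above:

import re

# Hypothetical filter: keep only hrefs that look like dated CNN article URLs,
# e.g. https://www.cnn.com/2018/03/27/asia/.../index.html
article_pattern = re.compile(r"cnn\.com/\d{4}/\d{2}/\d{2}/")

article_links = [link['href']
                 for link in soup.find_all('a', href=True)
                 if article_pattern.search(link['href'])]
print(article_links)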

Upvotes: 0
