MUK

Reputation: 411

Scrape data from multiple pages of a website using python

I want to crawl around 500 articles from the Al Jazeera website and collect 4 tags from each, i.e.

I have written a script that collects data from the home page, but it only collects a couple of articles; the others are in different categories. How can I iterate through 500 articles? Is there an efficient way to do it?

import requests
import pandas as pd
from bs4 import BeautifulSoup

page = requests.get('https://www.aljazeera.com/')
soup = BeautifulSoup(page.content, "html.parser")
article = soup.find(id='more-top-stories')
inside_articles = article.find_all(class_='mts-article mts-default-article')
article_title = [a.find(class_='mts-article-title').get_text() for a in inside_articles]
article_dec = [a.find(class_='mts-article-p').get_text() for a in inside_articles]
tag = [a.find(class_='mts-category').get_text() for a in inside_articles]
link = [a.find(class_='mts-article-title').find('a')['href'] for a in inside_articles]
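The "iterate through 500 articles" part can be separated from any particular parser: walk successive listing/category pages and accumulate unique article URLs until the target count is reached. A minimal sketch of that loop (the helper names and the demo data are illustrative, not part of the site; plug in a real fetch-and-parse step for `extract_links`):

```python
def collect_article_urls(listing_pages, extract_links, limit=500):
    """Walk successive listing pages, accumulating unique article URLs
    until `limit` is reached or the pages run out."""
    collected = []
    seen = set()
    for html in listing_pages:
        for url in extract_links(html):
            if url not in seen:
                seen.add(url)
                collected.append(url)
                if len(collected) == limit:
                    return collected
    return collected

# Demo with stand-in data (no network): two fake listing pages that overlap.
fake_pages = ["pageA", "pageB"]
fake_links = {"pageA": ["/a1", "/a2"], "pageB": ["/a2", "/a3"]}
urls = collect_article_urls(fake_pages, lambda h: fake_links.get(h, []))
print(urls)  # ['/a1', '/a2', '/a3'] -- duplicates across pages are skipped
```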

Upvotes: 0

Views: 351

Answers (1)

Shahzaib Butt

Reputation: 31

You can use Scrapy for this purpose.

import scrapy
import json

class BlogsSpider(scrapy.Spider):
    name = 'blogs'
    start_urls = [
        'https://www.aljazeera.com/news/2020/05/fbi-texas-naval-base-shooting-terrorism-related-200521211619145.html',
    ]

    def parse(self, response):
        # Each article page embeds its metadata as JSON-LD inside a <script>
        # tag; locate it by the 'mainEntityOfPage' key and parse it directly.
        current_script = response.xpath(
            "//script[contains(., 'mainEntityOfPage')]/text()").get()
        json_data = json.loads(current_script)
        yield {
            'name': json_data['headline'],
            'author': json_data['author']['name'],
            'url': json_data['mainEntityOfPage'],
            'tags': response.css('div.article-body-tags ul li a::text').getall(),
        }

Save this as file.py inside your Scrapy project's spiders directory and run it with:

$ scrapy crawl blogs -o output.json

But set up the Scrapy project structure first (e.g. with scrapy startproject).
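The spider above crawls a single hard-coded URL; to reach ~500 articles you first need to discover article links from listing or category pages. A framework-free sketch of that link-collection step using only the standard library (the `/news/` URL pattern is an assumption here; check the site's real markup before relying on it):

```python
from html.parser import HTMLParser

class ArticleLinkCollector(HTMLParser):
    """Collect unique article-style links from a listing page, up to a limit."""

    def __init__(self, limit=500):
        super().__init__()
        self.limit = limit
        self.links = []
        self._seen = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a" or len(self.links) >= self.limit:
            return
        href = dict(attrs).get("href", "")
        # Heuristic: treat URLs containing /news/ as articles (an assumption;
        # adjust the pattern after inspecting the live site's HTML).
        if "/news/" in href and href not in self._seen:
            self._seen.add(href)
            self.links.append(href)

# Demo with a stand-in snippet (no network).
sample = """
<ul>
  <li><a href="/news/2020/05/story-one.html">One</a></li>
  <li><a href="/news/2020/05/story-one.html">One again</a></li>
  <li><a href="/sport/other.html">Not news</a></li>
  <li><a href="/news/2020/05/story-two.html">Two</a></li>
</ul>
"""
collector = ArticleLinkCollector(limit=500)
collector.feed(sample)
print(collector.links)  # deduplicated /news/ links only
```

The collected URLs can then be fed to the spider as start_urls (or yielded as scrapy.Request objects from a listing-page parse callback), so one crawl covers all ~500 articles.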

Upvotes: 1
