mazix

Reputation: 2604

Parsing web page in python using Beautiful Soup

I'm having some trouble getting data from a website. The website source is here:

view-source:http://release24.pl/wpis/23714/%22La+mer+a+boire%22+%282011%29+FRENCH.DVDRip.XviD-AYMO

There's something like this in it:

INFORMACJE O FILMIE

Tytuł............................................: La mer à boire

Ocena.............................................: IMDB - 6.3/10 (24)

Produkcja.........................................: Francja

Gatunek...........................................: Dramat

Czas trwania......................................: 98 min.

Premiera..........................................: 22.02.2012 - Świat

Reżyseria........................................: Jacques Maillot

Scenariusz........................................: Pierre Chosson, Jacques Maillot

Aktorzy...........................................: Daniel Auteuil, Maud Wyler, Yann Trégouët, Alain Beigel

And I want to get the data from this website to have a Python list of strings:

[["Tytuł", "La mer à boire"],
 ["Ocena", "IMDB - 6.3/10 (24)"],
 ["Produkcja", "Francja"],
 ["Gatunek", "Dramat"],
 ["Czas trwania", "98 min."],
 ["Premiera", "22.02.2012 - Świat"],
 ["Reżyseria", "Jacques Maillot"],
 ["Scenariusz", "Pierre Chosson, Jacques Maillot"],
 ["Aktorzy", "Daniel Auteuil, Maud Wyler, Yann Trégouët, Alain Beigel"]]

I wrote some code using BeautifulSoup but I can't get any further. I just don't know how to get the rest of the data from the website source, or how to convert it to strings... Please, help!

My code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import urllib2
from bs4 import BeautifulSoup

try :
    web_page = urllib2.urlopen("http://release24.pl/wpis/23714/%22La+mer+a+boire%22+%282011%29+FRENCH.DVDRip.XviD-AYMO").read()
    soup = BeautifulSoup(web_page)
    c = soup.find('span', {'class':'vi'}).contents
    print(c)
except urllib2.HTTPError :
    print("HTTPERROR!")
except urllib2.URLError :
    print("URLERROR!")

Upvotes: 8

Views: 10990

Answers (4)

hidan

Reputation: 1

import requests
from bs4 import BeautifulSoup

# Read the number of result pages from the pagination widget
page = requests.get('https://habr.com/ru/search/page1/?q=Ютуб&target_type=posts&order=relevance').text
page_soup = BeautifulSoup(page, 'html.parser')
count_pages = int(page_soup.find_all('div', 'tm-pagination__page-group')[-1].text.split()[0])

# Collect the article links from every result page
hrefs = []
for i in range(1, count_pages + 1):
    print(i)
    page = requests.get(f'https://habr.com/ru/search/page{i}/?q=Новости&target_type=posts&order=relevance').text
    page_s = BeautifulSoup(page, 'html.parser')
    links = page_s.find_all('article', 'tm-articles-list__item')
    for link in links:
        hrefs.append(f'https://habr.com/ru/news/{link["id"]}/')

# Fetch the body text of each article
texts = [''] * len(hrefs)
for ind, href in enumerate(hrefs):
    print(ind)
    pagex = requests.get(href).text
    page_su = BeautifulSoup(pagex, 'html.parser')
    try:
        text = page_su.find_all("div", "article-formatted-body article-formatted-body article-formatted-body_version-1")[0].text
        texts[ind] = text
    except IndexError:
        pass

Upvotes: -1

Said Py

Reputation: 11

Here is the clean code:

import requests
from bs4 import BeautifulSoup

try:
    # Send an HTTP GET request to the URL
    url = "http://release24.pl/wpis/23714/%22La+mer+a+boire%22+%282011%29+FRENCH.DVDRip.XviD-AYMO"
    response = requests.get(url)

    # Check if the request was successful (status code 200)
    if response.status_code == 200:
        soup = BeautifulSoup(response.content, 'html.parser')

        # Find the span elements with class 'vi'
        vi_elements = soup.find_all('span', class_='vi')

        # Initialize a list to store the data
        data_list = []

        # Iterate through the 'vi' elements and extract the information
        for vi_element in vi_elements:
            # Extract the label and value as strings
            label = vi_element.find_previous('strong').get_text(strip=True)
            value = vi_element.get_text(strip=True)
            
            # Append the label and value as a list to the data_list
            data_list.append([label, value])

        # Print the data_list
        for item in data_list:
            print(item)
    else:
        print('Failed to retrieve the webpage. Status code:', response.status_code)

except requests.exceptions.RequestException as e:
    print('Error:', e)

This code sends an HTTP GET request to the specified URL, parses the HTML content, finds the vi elements, extracts the label and value, and stores them in the data_list. Finally, it prints the data list, which should resemble the desired format.


Upvotes: 0

brandizzi

Reputation: 27100

The secret of using BeautifulSoup is to find the hidden patterns of your HTML document. For example, your loop

for ul in soup.findAll('p') :
    print(ul)

is in the right direction, but it will return all paragraphs, not only the ones you are looking for. The paragraphs you are looking for, however, have the helpful property of having the class i. Inside these paragraphs one can find two spans, one with the class i and another with the class vi. We are lucky because those spans contain the data you are looking for:

<p class="i">
    <span class="i">Tytuł............................................</span>
    <span class="vi">: La mer à boire</span>
</p>

So, first get all the paragraphs with the given class:

>>> ps = soup.findAll('p', {'class': 'i'})
>>> ps
[<p class="i"><span class="i">Tytuł... <LOTS OF STUFF> ...pan></p>]

Now, using list comprehensions, we can generate a list of pairs, where each pair contains the first and the second span from the paragraph:

>>> spans = [(p.find('span', {'class': 'i'}), p.find('span', {'class': 'vi'})) for p in ps]
>>> spans
[(<span class="i">Tyt... ...</span>, <span class="vi">: La mer à boire</span>), 
 (<span class="i">Ocena... ...</span>, <span class="vi">: IMDB - 6.3/10 (24)</span>),
 (<span class="i">Produkcja.. ...</span>, <span class="vi">: Francja</span>),
 # and so on
]

Now that we have the spans, we can get the texts from them:

>>> texts = [(span_i.text, span_vi.text) for span_i, span_vi in spans]
>>> texts
[(u'Tytu\u0142............................................', u': La mer \xe0 boire'),
 (u'Ocena.............................................', u': IMDB - 6.3/10 (24)'),
 (u'Produkcja.........................................', u': Francja'), 
  # and so on
]

Those texts are still not quite right, but they are easy to correct. To remove the trailing dots from the first one, we can use rstrip():

>>> u'Produkcja.........................................'.rstrip('.')
u'Produkcja'

The leading : can be removed with lstrip():

>>> u': Francja'.lstrip(': ')
u'Francja'
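A small aside (not from the original answer): lstrip() and rstrip() treat their argument as a *set* of characters to remove, not as a literal prefix or suffix. That is exactly why lstrip(': ') works here, though it can over-strip if a value itself begins with one of those characters:

```python
# str.lstrip()/str.rstrip() strip every leading/trailing character
# that appears in the argument, treating it as a set of characters.
label = u'Produkcja.........................................'
value = u': Francja'

stripped_label = label.rstrip('.')   # removes all trailing dots
stripped_value = value.lstrip(': ')  # removes leading ':' and ' ' characters

# Order and repetition in the set do not matter:
also_stripped = u':: : Francja'.lstrip(': ')

print(stripped_label, stripped_value, also_stripped)
# Produkcja Francja Francja
```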

To apply it to all content, we just need another list comprehension:

>>> result = [(text_i.rstrip('.'), text_vi.replace(': ', '')) for text_i, text_vi in texts]
>>> result
[(u'Tytu\u0142', u'La mer \xe0 boire'),
 (u'Ocena', u'IMDB - 6.3/10 (24)'),
 (u'Produkcja', u'Francja'),
 (u'Gatunek', u'Dramat'),
 (u'Czas trwania', u'98 min.'),
 (u'Premiera', u'22.02.2012 - \u015awiat'),
 (u'Re\u017cyseria', u'Jacques Maillot'),
 (u'Scenariusz', u'Pierre Chosson, Jacques Maillot'),
 (u'Aktorzy', u'Daniel Auteuil, Maud Wyler, Yann Tr&eacute;gou&euml;t, Alain Beigel'),
 (u'Wi\u0119cej na', u':'),
 (u'Trailer', u':Obejrzyj zwiastun')]

And that is it. I hope this step-by-step example can make the use of BeautifulSoup clearer for you.
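The steps above can also be condensed into a single list comprehension. Here is a minimal, self-contained sketch applying the same p.i / span.i / span.vi pattern to an inline HTML fragment (made up for illustration, so it runs without a network call):

```python
from bs4 import BeautifulSoup

# An inline fragment shaped like the target page, for illustration only.
html = '''
<p class="i"><span class="i">Tytul....</span><span class="vi">: La mer a boire</span></p>
<p class="i"><span class="i">Produkcja....</span><span class="vi">: Francja</span></p>
'''

soup = BeautifulSoup(html, 'html.parser')
result = [
    (p.find('span', {'class': 'i'}).text.rstrip('.'),
     p.find('span', {'class': 'vi'}).text.lstrip(': '))
    for p in soup.findAll('p', {'class': 'i'})
]
print(result)
# [('Tytul', 'La mer a boire'), ('Produkcja', 'Francja')]
```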

Upvotes: 14

mwoods

Reputation: 317

This will get you the list you want. You'll still have to write some code to strip the trailing '....'s and to decode the HTML character entities.

    import urllib2
    from bs4 import BeautifulSoup

    try:
        web_page = urllib2.urlopen("http://release24.pl/wpis/23714/%22La+mer+a+boire%22+%282011%29+FRENCH.DVDRip.XviD-AYMO").read()
        soup = BeautifulSoup(web_page)
        LIST = []
        for p in soup.findAll('p'):
            s = p.find('span', {'class': 'i'})
            t = p.find('span', {'class': 'vi'})
            if s and t:
                LIST.append([s.string, t.string])
    except urllib2.HTTPError:
        print("HTTPERROR!")
    except urllib2.URLError:
        print("URLERROR!")

Upvotes: 0
