Reputation: 1728
I'm trying to extract the HTML code of a table from a webpage using BeautifulSoup.
<table class="facts_label" id="facts_table">...</table>
I would like to know why the code below works with "html.parser" but prints back None if I change "html.parser" to "lxml".
#! /usr/bin/python
# Python 2 code: urlopen comes from urllib and print is a statement.
from bs4 import BeautifulSoup
from urllib import urlopen

webpage = urlopen('http://www.thewebpage.com')  # placeholder URL
soup = BeautifulSoup(webpage, "html.parser")
table = soup.find('table', {'class': 'facts_label'})
print table
Upvotes: 24
Views: 37771
Reputation: 1166
Short answer: if you have already installed lxml, just use it.
html.parser - BeautifulSoup(markup, "html.parser")
Advantages: Batteries included, Decent speed, Lenient (as of Python 2.7.3 and 3.2)
Disadvantages: Not very lenient (before Python 2.7.3 or 3.2.2)
lxml - BeautifulSoup(markup, "lxml")
Advantages: Very fast, Lenient
Disadvantages: External C dependency
html5lib - BeautifulSoup(markup, "html5lib")
Advantages: Extremely lenient, Parses pages the same way a web browser does, Creates valid HTML5
Disadvantages: Very slow, External Python dependency
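As a rough sketch of that recommendation (the try/except fallback and the sample markup are my own illustration, not part of the original answer), you can prefer lxml when it is importable and fall back to the bundled html.parser otherwise:

from bs4 import BeautifulSoup

# Prefer lxml when it is installed; otherwise fall back to the bundled parser.
try:
    import lxml  # noqa: F401 - imported only to check availability
    PARSER = "lxml"
except ImportError:
    PARSER = "html.parser"

markup = '<table class="facts_label" id="facts_table"><tr><td>42</td></tr></table>'
soup = BeautifulSoup(markup, PARSER)
print(soup.find('table', {'class': 'facts_label'}))

With well-formed markup like this fragment, both parsers find the table; the choice mainly matters for speed and for how broken HTML gets repaired.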
Upvotes: 43
Reputation: 473763
There is a dedicated section in the BeautifulSoup documentation called Differences between parsers, which states:
Beautiful Soup presents the same interface to a number of different parsers, but each parser is different. Different parsers will create different parse trees from the same document. The biggest differences are between the HTML parsers and the XML parsers.
The differences become clear with HTML documents that are not well-formed.
The moral is just that you should use the parser that works in your particular case.
Also note that you should always explicitly specify which parser you are using. This helps you avoid surprises when running the code on different machines or in different virtual environments.
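To see what "different parse trees from the same document" means in practice, here is a minimal sketch (the broken fragment is just an illustration, not the asker's page) that feeds the same invalid markup to each parser and prints the tree it builds:

from bs4 import BeautifulSoup

# The same non-well-formed fragment handed to different parsers yields
# different trees; the exact output depends on which parsers are installed
# and their versions.
broken = "<a></p>"

for parser in ("html.parser", "lxml", "html5lib"):
    try:
        soup = BeautifulSoup(broken, parser)
        print("%s -> %s" % (parser, soup))
    except Exception:
        print("%s is not installed" % parser)

On a real page, the same repair behaviour can move or drop a badly nested <table>, which is why soup.find() can return the element under one parser and None under another.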
Upvotes: 34