Reputation: 1223
I'm trying to scrape the contents of a website, but the output contains unwanted spaces and I can't interpret it. I'm using a simple piece of code:
import urllib2
from bs4 import BeautifulSoup
html= 'http://idlebrain.com/movie/archive/index.html'
soup = BeautifulSoup(urllib2.urlopen(html).read())
print(soup.prettify(formatter=None))
OUTPUT (the full output is very large, so here is a small part of it to show the problem I'm facing):
<html><head><title>Telugu cinema reviews by Jeevi - idlebrain.com</title>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/>
</head><bodybgcolor="#FFFFFF" leftmargin="0" marginheight="0" marginwidth="0" topmargin="0"><table border="0" cellpadding="0" cellspacing="0" width="96%">
<tr>
<td align="left"> <img alt="Idlebrain.Com" height="63" src="../../image/vox_r01_c2.gif"width="264"/></td>
<td><div align="right"><script type="text/javascript"><!--
g o o g l e _ a d _ c l i e n t = " c a - p u b - 8 8 6 3 7 1 8 7 5 2 0 4 9 7 3 9 " ;
/ * r e v i e w s - h o r * /
g o o g l e _ a d _ s l o t = " 1 6 4 8 6 2 0 2 7 3 " ;
g o o g l e _ a d _ w i d t h = 7 2 8 ;
g o o g l e _ a d _ h e i g h t = 9 0 ;
/ / - - >
< / s c r i p t >
< s c r i p t t y p e = " t e x t / j a v a s c r i p t "
s r c = " h t t p : / / p a g e a d 2 . g o o g l e s y n d i c a t i o n . c o m / p a g e a d / s h o w _ a d s . j s " >
< / s c r i p t >
< / d i v >
< / t d >
< / t r >
< / t a b l e >
< t a b l e w i d t h = " 9 6 % " b o r d e r = " 0 " c e l l s p a c i n g = " 0 " c e l l p a d d i n g = " 0 " >
< t r >
< t d w i d t h = " 1 2 8 " v a l i g n = " t o p " a l i g n = " l e f t " >
< t a b l e b o r d e r = " 0 " c e l l p a d d i n g = " 0 " c e l l s p a c i n g = " 0 " w i d t h = " 1 1 9 " >
< / t r >
< / t a b l e >
< / b o d y >
< / h t m l >
</script></div></td></tr></table></body></html>
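For what it's worth, a gap-between-every-character pattern like the one above is typical of UTF-16-encoded bytes being decoded with a one-byte-per-character codec: the interleaved NUL bytes survive the decode and render as spaces. A minimal stdlib sketch reproducing the symptom (an assumption about the cause, not a confirmed diagnosis of this particular page):

```python
# ASCII text encoded as UTF-16-LE puts a NUL byte after every character.
raw = "<script>".encode("utf-16-le")   # b'<\x00s\x00c\x00r\x00i\x00p\x00t\x00>\x00'

# Decoding those bytes with a one-byte codec keeps the NULs,
# which many terminals display as blank gaps between letters.
text = raw.decode("latin-1")
print(text)

# Taking every other character recovers the original string.
print(text[::2])
```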
Thanks!
Upvotes: 2
Views: 736
Reputation: 2991
You can specify the parser as html.parser:
soup = BeautifulSoup(urllib2.urlopen(html).read(), 'html.parser')
Or you can specify the html5lib parser:
soup = BeautifulSoup(urllib2.urlopen(html).read(), 'html5lib')
Haven't installed the html5lib parser yet? Install it from the command line:
sudo apt-get install python-html5lib
You may also use the xml parser (it requires lxml), but you may see some differences in multi-valued attributes like class="foo bar":
soup = BeautifulSoup(urllib2.urlopen(html).read(), 'xml')
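The html.parser option is BeautifulSoup's name for Python's built-in HTML parser. A minimal stdlib-only sketch (not bs4 itself) of what that underlying parser does with a fragment like the one in the question; the module path shown is for Python 3, while the question's Python 2 code would import the HTMLParser module instead:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collect the names of all opening tags the parser encounters."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        # Tag names are reported lowercased by the parser.
        self.tags.append(tag)

collector = TagCollector()
collector.feed('<table border="0"><tr><td>Idlebrain.Com</td></tr></table>')
print(collector.tags)  # ['table', 'tr', 'td']
```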
Upvotes: 1
Reputation: 781
This is probably a duplicate of BeautifulSoup not reading documents correctly, i.e. it was caused by a bug in BeautifulSoup 4.0.2. That bug was fixed in 4.0.3. You might want to check the output of
>>> import bs4
>>> bs4.__version__
I suspect it's 4.0.2 for your system-wide BeautifulSoup, while it's 4.0.3 (or later) in your virtualenv. So if you want your code to run properly outside the virtualenv, upgrade your system's BeautifulSoup to a later version.
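Note that bs4.__version__ is a plain string, so comparing it lexicographically can mislead once version parts reach two digits. A small stdlib sketch of a numeric comparison (the version_tuple helper is made up here for illustration, not part of bs4):

```python
def version_tuple(version):
    """Split a dotted version string like '4.0.2' into a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

# The buggy release sorts before the fixed one when compared numerically:
print(version_tuple("4.0.2") < version_tuple("4.0.3"))  # True

# Naive string comparison gets multi-digit parts wrong:
print("4.10.0" < "4.2.0")  # True, even though 4.10 is the newer release
```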
Upvotes: 0
Reputation: 1223
I solved it, but I don't know the exact reason. I installed virtualenv, ran my program inside it, and it worked perfectly.
Upvotes: 0