Vishal Khialani

Reputation: 2587

utf8 codec can't decode byte 0x96 in python

I am trying to check whether a certain word appears on a page, for many sites. The script runs fine for, say, 15 sites and then it stops.

UnicodeDecodeError: 'utf8' codec can't decode byte 0x96 in position 15344: invalid start byte

I searched Stack Overflow and found many questions about this error, but I can't seem to understand what went wrong in my case.

I would like to either solve it or, if there is an error, skip that site. Please advise how I can do this, as I am new and the code below took me a day to write. By the way, the site the script halted on was http://www.homestead.com

filetocheck = open("bloglistforcommenting","r")
resultfile = open("finalfile","w")

for countofsites in filetocheck.readlines():
        sitename = countofsites.strip()
        htmlfile = urllib.urlopen(sitename)
        page = htmlfile.read().decode('utf8')
        match = re.search("Enter your name", page)
        if match:
            print "match found  : " + sitename
            resultfile.write(sitename+"\n")

        else:
            print "sorry did not find the pattern " +sitename

print "Finished Operations"

As per Mark's comments, I changed the code to use BeautifulSoup:

import urllib
import BeautifulSoup  # note: this imports the module, not the class

htmlfile = urllib.urlopen("http://www.homestead.com")
page = BeautifulSoup((''.join(htmlfile)))
print page.prettify()

Now I am getting this error:

page = BeautifulSoup((''.join(htmlfile)))
TypeError: 'module' object is not callable

I am trying the Quick Start example from http://www.crummy.com/software/BeautifulSoup/documentation.html#Quick%20Start. If I copy and paste that example, the code works fine. (The fix, as the final code below shows, turned out to be importing the class with "from BeautifulSoup import BeautifulSoup" rather than "import BeautifulSoup", which binds the module itself to the name, so calling it raises the TypeError above.)

I FINALLY got it to work. Thank you all for your help. Here is the final code.

import urllib
import re
from BeautifulSoup import BeautifulSoup

filetocheck = open("listfile","r")

resultfile = open("finalfile","w")
error ="for errors"

for countofsites in filetocheck.readlines():
        sitename = countofsites.strip()
        htmlfile = urllib.urlopen(sitename)
        page = BeautifulSoup((''.join(htmlfile)))  
        pagetwo =str(page) 
        match = re.search("Enter YourName", pagetwo)
        if match:
            print "match found  : " + sitename
            resultfile.write(sitename+"\n")

        else:
            print "sorry did not find the pattern " +sitename

print "Finished Operations"

Upvotes: 26

Views: 47427

Answers (3)

Jon Hanna

Reputation: 113242

The byte at position 15344 is 0x96. Presumably at position 15343 there is either a single-byte encoding of a character or the last byte of a multi-byte encoding, making 15344 the start of a character. 0x96 in binary is 10010110, and any byte matching the pattern 10XXXXXX (0x80 to 0xBF) can only be a second or subsequent byte in a UTF-8 sequence.

Hence the stream is either not UTF-8 or else is corrupted.
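
You can check this quickly in an interpreter (a small illustration, not part of the fix):

>>> bin(0x96)
'0b10010110'
>>> 0x80 <= 0x96 <= 0xBF  # the 10XXXXXX continuation-byte range
True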

Examining the URI you link to, we find the header:

Content-Type: text/html

Since there is no encoding stated, we should use the default for HTTP, which is ISO-8859-1 (aka "Latin 1").

Examining the content we find the line:

<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">

This is a fall-back mechanism for people who are, for some reason, unable to set their HTTP headers correctly. This time we are explicitly told that the character encoding is ISO-8859-1.

As such, there's no reason to expect reading it as UTF-8 to work.

For extra fun, though, when we consider that in ISO-8859-1 0x96 encodes U+0096, which is the control character START OF GUARDED AREA, we find that ISO-8859-1 isn't correct either. It seems the people creating the page made a similar error to yours.

From context, it would seem that they actually used Windows-1252, as in that encoding 0x96 encodes U+2013 (EN DASH, which looks like "–").

So, to parse this particular page you want to decode in Windows-1252.

More generally, you want to examine the headers when picking a character encoding, and while that would perhaps be incorrect in this case (or perhaps not; more than a few "ISO-8859-1" codecs are actually Windows-1252), you'll be correct more often that way.

You still need something to catch failures like this, by reading with a fallback. The decode method takes a second parameter called errors. The default is 'strict', but you can also have 'ignore', 'replace', 'xmlcharrefreplace' (not appropriate here), 'backslashreplace' (not appropriate here), and you can register your own fallback handler with codecs.register_error().
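
Putting that together, a minimal sketch (Python 2, stdlib only; the windows-1252 fallback is specific to pages like this one):

import urllib

htmlfile = urllib.urlopen("http://www.homestead.com")
raw = htmlfile.read()

# Use the charset declared in the Content-Type header, if any;
# otherwise fall back to the HTTP default of ISO-8859-1.
charset = htmlfile.info().getparam('charset') or 'iso-8859-1'

try:
    page = raw.decode(charset)
except (UnicodeDecodeError, LookupError):
    # Last resort: Windows-1252, with replacement characters for
    # anything that still will not decode.
    page = raw.decode('windows-1252', 'replace')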

Upvotes: 30

Mark Byers

Reputation: 838186

Many web pages are encoded incorrectly. For parsing HTML, try BeautifulSoup; it can handle many kinds of incorrect HTML that are found in the wild.

Beautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping. Three features make it powerful:

  1. Beautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away.

  2. Beautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application.

  3. Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't autodetect one. Then you just have to specify the original encoding.

Emphasis mine.
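
For instance (a minimal sketch using the BeautifulSoup 3 package the question already imports):

import urllib
from BeautifulSoup import BeautifulSoup

html = urllib.urlopen("http://www.homestead.com").read()
soup = BeautifulSoup(html)     # the document is converted to Unicode internally
print soup.originalEncoding    # the encoding Beautiful Soup detected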

Upvotes: 9

Duncan

Reputation: 95652

The site http://www.homestead.com doesn't claim to be sending you UTF-8; the response actually claims to be ISO-8859-1:

<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">

You must use the correct encoding for the page you actually received, not just guess randomly.
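
In other words, decode with the charset the page declares (a one-off sketch for this page):

import urllib

htmlfile = urllib.urlopen("http://www.homestead.com")
page = htmlfile.read().decode('iso-8859-1')  # the charset the page declares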

Upvotes: 3
