Reputation: 51
I have a text file larger than 200 MB. I want to read it and then select the 30 most frequently used words. When I run the code below, it gives me an error. The code is as under:
import sys, string
import re
import codecs
from collections import Counter
import collections
import unicodedata

with open('E:\\Book\\1800.txt', "r", encoding='utf-8') as File_1800:
    for line in File_1800:
        sepFile_1800 = line.lower()
        words_1800 = re.findall('\w+', sepFile_1800)

for wrd_1800 in [words_1800]:
    long_1800 = [w for w in wrd_1800 if len(w) > 3]
    common_words_1800 = dict(Counter(long_1800).most_common(30))
    print(common_words_1800)
Traceback (most recent call last):
  File "C:\Python34\CommonWords.py", line 14, in <module>
    for line in File_1800:
  File "C:\Python34\lib\codecs.py", line 313, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa3 in position 3784: invalid start byte
Upvotes: 0
Views: 222
Reputation: 4282
Try encoding='latin1' instead of 'utf-8'.
Also, in these lines:

for line in File_1800:
    sepFile_1800 = line.lower()
    words_1800 = re.findall('\w+', sepFile_1800)

for wrd_1800 in [words_1800]:
    ...

the script re-assigns the matches of re.findall to the words_1800 variable for every line. So when you get to for wrd_1800 in [words_1800], the words_1800 variable only has matches from the very last line.
If you want to make minimal changes, initialize an empty list before iterating through the file:
words_1800 = []
And then add the matches for each line to the list, rather than replacing the list:
words_1800.extend(re.findall('\w+', sepFile_1800))
Then you can do (without the second for loop):
long_1800 = [w for w in words_1800 if len(w) > 3]
common_words_1800 = dict(Counter(long_1800).most_common(30))
print(common_words_1800)
Upvotes: 0
Reputation: 2823
The file does not contain 'UTF-8' encoded data. Find the correct encoding and update the line:

with open('E:\\Book\\1800.txt', "r", encoding='correct_encoding')
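One rough way to pick a workable encoding is to try decoding a sample of the raw bytes against a few candidates. This is a sketch, not a definitive detector: the candidate list is an assumption, and a successful decode only proves the bytes are valid in that encoding, not that it is the true one (latin1 in particular maps every byte, so it always succeeds and acts as a last-resort fallback):

```python
def guess_encoding(path, candidates=('utf-8', 'cp1252', 'latin1'),
                   sample_size=1_000_000):
    """Return the first candidate encoding that decodes a sample of the file,
    or None if none of them do."""
    with open(path, 'rb') as fh:      # read raw bytes, no decoding yet
        sample = fh.read(sample_size)
    for enc in candidates:
        try:
            sample.decode(enc)        # raises UnicodeDecodeError on failure
            return enc
        except UnicodeDecodeError:
            continue
    return None
```

For byte 0xa3 (the one in the traceback), utf-8 fails but cp1252 decodes it as '£', which is consistent with a Western-European text file.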
Upvotes: 1