Reputation: 5407
I am trying to train some text data using scikit-learn. The same code runs on another PC without any error, but on my system it gives this error:
File "/root/Desktop/karim/svn/questo-anso/v5/trials/classify/domain_detection_final/test_classifier_temp.py", line 130, in trainClassifier
X_train = self.vectorizer.fit_transform(self.data_train.data)
File "/root/Desktop/karim/software/scikit-learn-0.15.1/sklearn/feature_extraction/text.py", line 1270, in fit_transform
X = super(TfidfVectorizer, self).fit_transform(raw_documents)
File "/root/Desktop/karim/software/scikit-learn-0.15.1/sklearn/feature_extraction/text.py", line 808, in fit_transform
vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
File "/root/Desktop/karim/software/scikit-learn-0.15.1/sklearn/feature_extraction/text.py", line 741, in _count_vocab
for feature in analyze(doc):
File "/root/Desktop/karim/software/scikit-learn-0.15.1/sklearn/feature_extraction/text.py", line 233, in <lambda>
tokenize(preprocess(self.decode(doc))), stop_words)
File "/root/Desktop/karim/software/scikit-learn-0.15.1/sklearn/feature_extraction/text.py", line 111, in decode
doc = doc.decode(self.encoding, self.decode_error)
File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xba in position 1266: invalid start byte
I already checked similar threads, but none of them helped.
UPDATE:
self.data_train = self.fetch_data(cache, subset='train')
if not os.path.exists(self.root_dir + "/autocreated/vectorizer.txt"):
    self.vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,
                                      stop_words='english')
    start_time = time()
    print("Transforming the dataset")
    X_train = self.vectorizer.fit_transform(self.data_train.data)  # Error is here
    joblib.dump(self.vectorizer, self.root_dir + "/autocreated/vectorizer.txt")
Upvotes: 4
Views: 15395
Reputation: 5407
There was an issue in the training data itself. What solved it for me was ignoring decoding errors by passing decode_error='ignore'; there may be other solutions.
self.vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,
                                  stop_words='english', decode_error='ignore')
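A minimal, self-contained sketch of this fix (the documents here are made up; the byte 0xBA stands in for whatever undecodable byte is in the real training data): with decode_error='ignore', the vectorizer silently drops bytes it cannot decode instead of raising UnicodeDecodeError.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical raw byte documents; the second one contains 0xBA,
# which is an invalid start byte in UTF-8.
docs = [b"plain ascii text", b"temperature 25\xba outside"]

# With decode_error='ignore' the offending byte is dropped during
# decoding, so fit_transform completes instead of raising.
vectorizer = TfidfVectorizer(decode_error='ignore')
X = vectorizer.fit_transform(docs)
```

Note that 'ignore' destroys information: the character is simply discarded, which is acceptable for tf-idf features but not if the text must be preserved.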
Upvotes: 3
Reputation: 174708
Your file is actually encoded in ISO-8859-1, not UTF-8. You need to decode it with the correct codec before you can encode it as UTF-8 again.
0xBA is the masculine ordinal indicator (º) in ISO-8859-1, as used in "Nº".
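A quick check confirms this, and (assuming every input file really is Latin-1) declaring the encoding to the vectorizer recovers the character instead of throwing it away. The byte documents below are hypothetical stand-ins for the real data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# 0xBA is invalid as a UTF-8 start byte, but decodes cleanly as Latin-1
raw = b"n\xba 1266"
decoded = raw.decode("iso-8859-1")  # -> 'nº 1266'

# Declare the real encoding instead of suppressing errors
# with decode_error='ignore'
docs = [b"temperature 25\xba outside"]
vectorizer = TfidfVectorizer(encoding="iso-8859-1")
X = vectorizer.fit_transform(docs)
```

This is the lossless alternative to decode_error='ignore': the bytes are interpreted correctly rather than discarded.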
Upvotes: 7