Shadic Mersal

Reputation: 134

'utf-8' codec can't decode byte 0xa3 in position 28: invalid start byte

I am trying to read a CSV file from Google Drive with the pandas library.
However, I get the error "UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa3 in position 28: invalid start byte".

My code

df = pd.read_csv("/content/gdrive/My Drive/data/OnlineRetail.csv")

Output

---------------------------------------------------------------------------
UnicodeDecodeError                        Traceback (most recent call last)
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_with_dtype()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._string_convert()

pandas/_libs/parsers.pyx in pandas._libs.parsers._string_box_utf8()

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa3 in position 28: invalid start byte

During handling of the above exception, another exception occurred:

UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input-6-65a06557fa8d> in <module>()
----> 1 df = pd.read_csv("/content/gdrive/My Drive/data/OnlineRetail.csv")

3 frames
/usr/local/lib/python3.7/dist-packages/pandas/io/parsers.py in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
    686     )
    687 
--> 688     return _read(filepath_or_buffer, kwds)
    689 
    690 

/usr/local/lib/python3.7/dist-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    458 
    459     try:
--> 460         data = parser.read(nrows)
    461     finally:
    462         parser.close()

/usr/local/lib/python3.7/dist-packages/pandas/io/parsers.py in read(self, nrows)
   1196     def read(self, nrows=None):
   1197         nrows = _validate_integer("nrows", nrows)
-> 1198         ret = self._engine.read(nrows)
   1199 
   1200         # May alter columns / col_dict

/usr/local/lib/python3.7/dist-packages/pandas/io/parsers.py in read(self, nrows)
   2155     def read(self, nrows=None):
   2156         try:
-> 2157             data = self._reader.read(nrows)
   2158         except StopIteration:
   2159             if self._first_chunk:

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_column_data()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_with_dtype()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._string_convert()

pandas/_libs/parsers.pyx in pandas._libs.parsers._string_box_utf8()

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa3 in position 28: invalid start byte

Upvotes: 6

Views: 28248

Answers (3)

Gelzone

Reputation: 21

I have run into this situation (the 0xa3 byte) before, and it is an encoding issue.
If reading with encoding='utf-8' or encoding='gbk' fails, try encoding='ISO-8859-1'.
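A minimal sketch of that suggestion, using the path from the question (0xa3 is the "£" sign in ISO-8859-1/Windows-1252, which is why a single-byte encoding tends to work here):

import pandas as pd

# 0xa3 decodes as "£" in ISO-8859-1 / Windows-1252, so telling pandas to use a
# single-byte encoding instead of UTF-8 usually clears the error.
df = pd.read_csv(
    "/content/gdrive/My Drive/data/OnlineRetail.csv",
    encoding="ISO-8859-1",   # "cp1252" is another common choice
)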
Good luck!

Upvotes: 1

jwhoakley

Reputation: 21

I've just had the same issue with a CSV file written out by an online service. Opening it in the Atom editor, the encoding showed as UTF-8, but counting to the character position the error reports, I found "�" where there should have been "£". A find-and-replace-all fixed it.
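If you don't have an editor handy, here is a rough Python sketch of the same check, assuming the file sits at the path from the question: decode the raw bytes as UTF-8 with errors="replace" and look for the "�" replacement character that marks each undecodable byte.

# Read the raw bytes; every byte that is not valid UTF-8 shows up as "�".
with open("/content/gdrive/My Drive/data/OnlineRetail.csv", "rb") as f:
    text = f.read().decode("utf-8", errors="replace")

for i, ch in enumerate(text):
    if ch == "\ufffd":                           # the "�" replacement character
        print(i, repr(text[max(0, i - 20):i + 20]))  # show some surrounding context
        break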

Good luck.

Upvotes: 2

Had the same issue. The file may not be UTF-8 encoded; try to figure out which encoding it actually uses. You can do this by opening it in Notepad++: there is an Encoding menu at the top that shows which encoding is selected.
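If a GUI editor is not available (for example on Colab), the chardet package can make the same guess programmatically. This is only a sketch and assumes chardet is installed (pip install chardet):

import chardet

# Let chardet guess the encoding from the raw bytes, then hand the guess to
# pandas. The path is the one from the question.
with open("/content/gdrive/My Drive/data/OnlineRetail.csv", "rb") as f:
    guess = chardet.detect(f.read())

print(guess)   # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.73, ...}
# df = pd.read_csv("/content/gdrive/My Drive/data/OnlineRetail.csv",
#                  encoding=guess["encoding"])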

Upvotes: 2
