John

Reputation: 35

Python pandas read gzipped csv from GitHub

I am currently trying to read a large compressed CSV file that I uploaded to GitHub.

Below is the code I'm using:

import pandas as pd

url = 'https://github.com/thefabscientist/Waterloo-DS3-Group-1-Project/blob/main/CanadaLabourData.csv.gz?raw=true'
df = pd.read_csv(url)
df.head()

but I get the following error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte

UnicodeDecodeError                        Traceback (most recent call last)
<command-2539456448276631> in <module>
      1 url = 'https://github.com/thefabscientist/Waterloo-DS3-Group-1-Project/blob/main/CanadaLabourData.csv.gz?raw=true'
----> 2 df = pd.read_csv(url)
      3 df.head()

/databricks/python/lib/python3.7/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
    674         )
    675 
--> 676         return _read(filepath_or_buffer, kwds)
    677 
    678     parser_f.__name__ = name

/databricks/python/lib/python3.7/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    446 
    447     # Create the parser.
--> 448     parser = TextFileReader(fp_or_buf, **kwds)
    449 
    450     if chunksize or iterator:

/databricks/python/lib/python3.7/site-packages/pandas/io/parsers.py in __init__(self, f, engine, **kwds)
    878             self.options["has_index_names"] = kwds["has_index_names"]
    879 
--> 880         self._make_engine(self.engine)
    881 
    882     def close(self):

/databricks/python/lib/python3.7/site-packages/pandas/io/parsers.py in _make_engine(self, engine)
   1112     def _make_engine(self, engine="c"):
   1113         if engine == "c":
-> 1114             self._engine = CParserWrapper(self.f, **self.options)
   1115         else:
   1116             if engine == "python":

/databricks/python/lib/python3.7/site-packages/pandas/io/parsers.py in __init__(self, src, **kwds)
   1889         kwds["usecols"] = self.usecols
   1890 
-> 1891         self._reader = parsers.TextReader(src, **kwds)
   1892         self.unnamed_cols = self._reader.unnamed_cols
   1893 

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._get_header()

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte

So I added an encoding:

df = pd.read_csv(url, encoding="ISO-8859-1")

but I get:

ParserError: Error tokenizing data. C error: Expected 2 fields in line 4, saw 3
---------------------------------------------------------------------------
ParserError                               Traceback (most recent call last)
<command-2539456448276631> in <module>
      1 url = 'https://github.com/thefabscientist/Waterloo-DS3-Group-1-Project/blob/main/CanadaLabourData.csv.gz?raw=true'
----> 2 df = pd.read_csv(url,  encoding = "ISO-8859-1")
      3 df.head()

/databricks/python/lib/python3.7/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
    674         )
    675 
--> 676         return _read(filepath_or_buffer, kwds)
    677 
    678     parser_f.__name__ = name

/databricks/python/lib/python3.7/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    452 
    453     try:
--> 454         data = parser.read(nrows)
    455     finally:
    456         parser.close()

/databricks/python/lib/python3.7/site-packages/pandas/io/parsers.py in read(self, nrows)
   1131     def read(self, nrows=None):
   1132         nrows = _validate_integer("nrows", nrows)
-> 1133         ret = self._engine.read(nrows)
   1134 
   1135         # May alter columns / col_dict

/databricks/python/lib/python3.7/site-packages/pandas/io/parsers.py in read(self, nrows)
   2035     def read(self, nrows=None):
   2036         try:
-> 2037             data = self._reader.read(nrows)
   2038         except StopIteration:
   2039             if self._first_chunk:

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 2 fields in line 4, saw 3

Lastly, I tried:

df = pd.read_csv(url, encoding="ISO-8859-1", engine='python', error_bad_lines=False)

and got absolute garbage:

Skipping line 65100: Expected 3 fields in line 65100, saw 4

    Ù´&ÅI^?¸$n'cVÕy][;öyís<¯çÿýûxüù¯]?}üáÕ_<þ®ýùÙϾú¬ýóëÅo»c3{±k«ÙòÕ¢Y... (many more lines of similarly garbled output)

Can anyone please help? TYIA

Upvotes: 2

Views: 564

Answers (1)

Randy

Reputation: 14847

You're looking for compression='gzip':

In [2]: import pandas as pd
   ...:
   ...: url = 'https://github.com/thefabscientist/Waterloo-DS3-Group-1-Project/blob/main/CanadaLabourData.csv.gz?raw=tr
   ...: ue'
   ...: df = pd.read_csv(url, compression='gzip')
   ...: df.head()
   ...:
Out[2]:
   REF_DATE     GEO           DGUID Labour force characteristics  ... STATUS SYMBOL TERMINATED DECIMALS
0      1976  Canada  2016A000011124                 Labour force  ...    NaN    NaN        NaN        1
1      1976  Canada  2016A000011124                 Labour force  ...    NaN    NaN        NaN        1
2      1976  Canada  2016A000011124                 Labour force  ...    NaN    NaN        NaN        1
3      1976  Canada  2016A000011124                 Labour force  ...    NaN    NaN        NaN        1
4      1976  Canada  2016A000011124                 Labour force  ...    NaN    NaN        NaN        1

[5 rows x 18 columns]
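For context: pandas normally infers compression from the filename suffix, but here the URL ends in `?raw=true`, so the `.gz` ending is hidden and pandas hands the raw gzip bytes (which begin `0x1f 0x8b` — hence the `0x8b` in the first error) straight to the CSV parser. Passing `compression='gzip'` bypasses the inference. A minimal local sketch of that behaviour (the file name `sample.csv.gz` is made up for the demo):

```python
import gzip
import pandas as pd

# Write a small gzipped CSV to disk for the demonstration.
csv_bytes = b"a,b\n1,2\n3,4\n"
with gzip.open("sample.csv.gz", "wb") as f:
    f.write(csv_bytes)

# With a visible ".gz" suffix, pandas infers gzip on its own...
df_inferred = pd.read_csv("sample.csv.gz")

# ...and compression="gzip" gives the same result explicitly, which is
# what you need when a query string hides the suffix from pandas.
df_explicit = pd.read_csv("sample.csv.gz", compression="gzip")
print(df_inferred.equals(df_explicit))
```

Using the `raw.githubusercontent.com` form of the URL should also work without the extra argument, since that URL ends in `.csv.gz` and pandas can infer the compression from it.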

Upvotes: 1
