Reputation: 883
I have a data frame with alphanumeric keys which I want to save as a CSV and read back later. For various reasons I need to explicitly read this key column as a string: I have keys which are strictly numeric, or even worse, things like 1234E5, which pandas interprets as a float. This obviously makes the key completely useless.
The problem is that when I specify a string dtype for the data frame, or for any column of it, I just get garbage back. I have some example code here:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(2, 2),
                  index=['1A', '1B'],
                  columns=['A', 'B'])
df.to_csv(savefile)  # savefile is a path defined elsewhere
The data frame looks like:
A B
1A 0.209059 0.275554
1B 0.742666 0.721165
Then I read it like so:
df_read = pd.read_csv(savefile, dtype=str, index_col=0)
and the result is:
A B
B ( <
Is this a problem with my computer, or something I'm doing wrong here, or just a bug?
Upvotes: 76
Views: 204094
Reputation: 9865
None of these solutions worked for me with newer pandas (pd.__version__ == '2.2.2').
The only thing that really helped was:
import pandas as pd
df = pd.read_csv(file, dtype=pd.StringDtype())
Passing str or np.str results in an error; the other solutions didn't work either.
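For reference, a minimal sketch of that call (keys.csv is a hypothetical file whose key column holds values like 1234E5 or 001):
import pandas as pd

# every column is read with the string extension dtype, so keys stay intact
df = pd.read_csv('keys.csv', dtype=pd.StringDtype())
print(df.dtypes)  # each column reports the 'string' dtype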
Upvotes: 0
Reputation: 412
Nowadays (pandas==1.0.5) it just works:
pd.read_csv(f, dtype=str)
will read everything as a string except for NaN values.
Here is the list of values that will be parsed as NaN: the empty string, '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan', '1.#IND', '1.#QNAN', 'N/A', 'NA', 'NULL', 'NaN', 'n/a', 'nan', 'null'.
If you don't want these strings to be parsed as NaN, use na_filter=False.
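A minimal sketch of that option (the file name is just a placeholder):
import pandas as pd

# keep literal values such as 'NA', 'null' or '' as strings instead of NaN
df = pd.read_csv('data.csv', dtype=str, na_filter=False)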
Upvotes: 30
Reputation: 16620
Many of the above answers are fine, but none of them is very elegant or universal. If you want to read all of the columns as strings you can use the following construct without caring about the number of columns:
from collections import defaultdict
import pandas as pd

# the default factory takes no arguments; every missing key falls back to str
pd.read_csv(file_or_buffer, converters=defaultdict(lambda: str))
The defaultdict will return str for every index passed into converters.
Upvotes: 1
Reputation: 351
Use a converter that applies to any column if you don't know the columns beforehand:
import pandas as pd

class StringConverter(dict):
    def __contains__(self, item):
        # claim to have a converter for every column
        return True

    def __getitem__(self, item):
        # hand back str for any column that is looked up
        return str

    def get(self, key, default=None):
        return str

pd.read_csv(file_or_buffer, converters=StringConverter())
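As a quick sanity check (illustrative in-memory data, not from the question), the converter keeps exponent-like keys intact:
import io
import pandas as pd

# keys that the type sniffer would otherwise read as numbers
buf = io.StringIO("key,value\n1234E5,10\n001,20\n")
df = pd.read_csv(buf, converters=StringConverter())
print(df['key'].tolist())  # ['1234E5', '001'] - read back as strings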
Upvotes: 4
Reputation: 375377
Update: this has been fixed: from 0.11.1, passing str/np.str will be equivalent to using object.
Use the object dtype:
In [11]: pd.read_csv('a', dtype=object, index_col=0)
Out[11]:
A B
1A 0.35633069074776547 0.745585398803751
1B 0.20037376323337375 0.013921830784260236
or better yet, just don't specify a dtype:
In [12]: pd.read_csv('a', index_col=0)
Out[12]:
A B
1A 0.356331 0.745585
1B 0.200374 0.013922
but bypassing the type sniffer and truly returning only strings requires a hacky use of converters:
In [13]: pd.read_csv('a', converters={i: str for i in range(100)})
Out[13]:
A B
1A 0.35633069074776547 0.745585398803751
1B 0.20037376323337375 0.013921830784260236
where 100 is some number equal to or greater than your total number of columns.
It's best to avoid the str dtype; see for example here.
Upvotes: 69
Reputation: 2962
Like Anton T said in his comment, pandas will randomly turn object types into float types using its type sniffer, even if you pass dtype=object, dtype=str, or dtype=np.str.
Since you can pass a dictionary of functions where the key is a column index and the value is a converter function, you can do something like this (e.g. for 100 columns).
pd.read_csv('some_file.csv', converters={i: str for i in range(0, 100)})
You can even pass range(0, N) for N much larger than the number of columns if you don't know how many columns you will read.
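For example, a sketch reusing the savefile from the question with a deliberately oversized range (converter positions beyond the actual columns are simply never consulted):
import pandas as pd

# 1000 far exceeds the file's column count, yet every real column gets str applied
df_read = pd.read_csv(savefile, converters={i: str for i in range(1000)}, index_col=0)
print(df_read.dtypes)  # A and B come back as object (plain Python strings)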
Upvotes: 14