Reputation: 5155
I know how to check for a NaN value in column 'A' of dataframe df as follows:
df['A'].isnull().values.any()
But how can I check for a string (any string, since I do not know what the string text is), and also find out which row it was found in?
Upvotes: 2
Views: 2623
Reputation: 488
If you are using Python 3, you can use a list comprehension and numpy.any:
import numpy as np
np.any([isinstance(val, str) for val in df['A']])
If you are using Python 2, I believe that you need to replace str with basestring.
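That tells you whether a string is present, but not where. Since the question also asks for the row, here is a minimal sketch that reuses the same boolean list with numpy.where to get the positional indices (the column name 'A' and the sample data are assumed for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 'aaa', 3.14]})

# Boolean mask: True where the value is a string
mask = [isinstance(val, str) for val in df['A']]

# Positional indices of the rows that hold strings
rows = np.where(mask)[0]
print(list(rows))  # -> [2]
```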
Upvotes: 3
Reputation: 210832
I would use a vectorized Pandas approach.
Assuming we have the following DF:
In [116]: df = pd.DataFrame({'a':[1,2,'aaa', 3.14, 2.71], 'b':['2016-01-01', 'bbb', '2016-02-02', '2016-03-03', 'ZZZ']})
In [117]: df
Out[117]:
      a           b
0     1  2016-01-01
1     2         bbb
2   aaa  2016-02-02
3  3.14  2016-03-03
4  2.71         ZZZ
In [118]: df.dtypes
Out[118]:
a    object
b    object
dtype: object
Check for strings in the column that is supposed to be numeric:
In [119]: pd.to_numeric(df.a, errors='coerce')
Out[119]:
0    1.00
1    2.00
2     NaN
3    3.14
4    2.71
Name: a, dtype: float64
In [120]: pd.to_numeric(df.a, errors='coerce').isnull()
Out[120]:
0    False
1    False
2     True
3    False
4    False
Name: a, dtype: bool
In [121]: df.loc[pd.to_numeric(df.a, errors='coerce').isnull()]
Out[121]:
     a           b
2  aaa  2016-02-02
Check for strings in the column that is supposed to be datetime-like:
In [122]: pd.to_datetime(df.b, errors='coerce')
Out[122]:
0   2016-01-01
1          NaT
2   2016-02-02
3   2016-03-03
4          NaT
Name: b, dtype: datetime64[ns]
In [123]: df.loc[pd.to_datetime(df.b, errors='coerce').isnull()]
Out[123]:
      a    b
1     2  bbb
4  2.71  ZZZ
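The same masks also answer the "which row" part of the question directly: the filtered frame's .index holds the offending row labels. A short sketch reusing the DF defined above:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 'aaa', 3.14, 2.71],
                   'b': ['2016-01-01', 'bbb', '2016-02-02', '2016-03-03', 'ZZZ']})

# Row labels where column 'a' could not be parsed as a number
bad_a = df.index[pd.to_numeric(df.a, errors='coerce').isnull()]
print(list(bad_a))  # -> [2]

# Row labels where column 'b' could not be parsed as a datetime
bad_b = df.index[pd.to_datetime(df.b, errors='coerce').isnull()]
print(list(bad_b))  # -> [1, 4]
```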
Upvotes: 0