Reputation: 103
My dataframe, df:

Col_A   Col_B
Apple   [1, 2, 3]
Banana  [1, null, 4]
Orange  [6, null, null]
The nulls are represented as np.nan in the dataframe. How can I check whether all the values in each row of Col_B are null?

This is my np.where:

np.where(pd.isnull(df['Col_B']).all(), "Fail", "Pass")

However, I get a "truth value is ambiguous" error even though I have .all() there. Is there any way to get around this issue?
Upvotes: 2
Views: 1228
Reputation: 317
I don't believe you need to use np.where() here. You can try it two other ways:
df['one_bad'] = df['Col_B'].apply(lambda x: np.nan in x)

will return True if there's at least one np.nan in the list. But to check whether all the values are np.nan, you can use np.isnan().all():
df['all_bad'] = df['Col_B'].apply(lambda x: np.isnan(x).all())

will return True if all the values in the list are np.nan and False otherwise.
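Put together, both checks can be tried on the dataframe from the question (a minimal runnable sketch; the data and column names are taken from the question, and the one_bad/all_bad column names are just illustrative):

```python
import numpy as np
import pandas as pd

# Rebuild the question's dataframe; each Col_B cell holds a list
df = pd.DataFrame({
    "Col_A": ["Apple", "Banana", "Orange"],
    "Col_B": [[1, 2, 3], [1, np.nan, 4], [6, np.nan, np.nan]],
})

# True if the list contains at least one np.nan
# (works here because np.nan is a singleton and `in` checks identity first)
df["one_bad"] = df["Col_B"].apply(lambda x: np.nan in x)

# True only if every value in the list is np.nan
df["all_bad"] = df["Col_B"].apply(lambda x: np.isnan(x).all())

print(df[["Col_A", "one_bad", "all_bad"]])
```

For this sample data, one_bad is True for Banana and Orange, while all_bad is False for every row, since no list is entirely np.nan.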
Upvotes: 2