Datacrawler

Reputation: 2876

Delete rows if rows (not columns separately) contain a string

I import data from a CSV where I am replacing the empty fields with an 'EMPTYFIELD' value.

df = pd.read_csv('myFile.csv', usecols=['AAA', 'BBB', 'CCC'])
df = df.fillna('EMPTYFIELD')

I am trying to create a DataFrame that will have all the rows that contain an 'EMPTYFIELD' value. That implies that at least one column contains this value. I used the following, and it works, of course:

error = df[df.AAA.str.contains('EMPTYFIELD')]
error = error[error.BBB.str.contains('EMPTYFIELD')]
error = error[error.CCC.str.contains('EMPTYFIELD')] 

Now I am trying to reduce the number of lines in my code, so I was thinking of using a lambda instead, ideally without referencing the columns:

error2 = df.apply(lambda x: 'EMPTYFIELD' if 'EMPTYFIELD' in x else x)

#error2 = df.apply(lambda x : any([ isinstance(e, 'EMPTYFIELD') for e in x ]), axis=1) 

and then I tried referencing the columns too:

error2 = df[usecols].apply(lambda x: 'EMPTYFIELD' if 'EMPTYFIELD' in x else x)

and

error2 = df[df[usecols].isin(['EMPTYFIELD'])]

None of the above works. I print the results to a new CSV file, and I can still see all the rows, even the ones that contain the 'EMPTYFIELD' value.

UPD: This is my extended code. Some of the answers return an error, possibly because of the lines below:

varA = 'AAA'
dfGrouped = df.groupby(varA, as_index=False).agg({'Start Date': 'min', 'End Date': 'max'}).copy()

varsToKeep = ['AAA', 'BBB', 'CCC', 'Start Date_grp', 'End Date_grp' ]
dfTemp = pd.merge(df, dfGrouped, how='inner', on='AAA', suffixes=(' ', '_grp'), copy=True)[varsToKeep]

errors = dfTemp[~np.logical_or.reduce([dfTemp[varsToKeep].str.contains('EMPTYFIELD') for varsToKeep in dfTemp])]
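A sketch of what a corrected version of that last line might look like (the comprehension above shadows varsToKeep as its loop variable, and the merged date columns are not strings, so they need casting before .str.contains):

# build one Boolean mask per column, casting to str so the date columns work too
mask = np.logical_or.reduce(
    [dfTemp[col].astype(str).str.contains('EMPTYFIELD') for col in varsToKeep])
errors = dfTemp[mask]   # rows with at least one 'EMPTYFIELD'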

Upvotes: 0

Views: 81

Answers (3)

jpp

Reputation: 164773

One way is to use np.logical_or.reduce. Here is an example:

import pandas as pd
import numpy as np

df = pd.DataFrame([['A', 'B', 'C', 'D'],
                   ['E', 'F', 'G', 'H'],
                   ['G', 'A', 'D', 'I'],
                   ['L', 'K', 'A', 'J'],
                   ['S', 'T', 'U', 'V']],
                  columns=['COL1', 'COL2', 'COL3', 'COL4'])

df[~np.logical_or.reduce([df[col].astype(str).str.contains('A') for col in df])]

#   COL1 COL2 COL3 COL4
# 1    E    F    G    H
# 4    S    T    U    V
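A quick follow-up sketch, since the question wants the rows that do contain the value: the same mask without the negation should select the offending rows instead:

# rows where at least one column contains 'A'
df[np.logical_or.reduce([df[col].astype(str).str.contains('A') for col in df])]

#   COL1 COL2 COL3 COL4
# 0    A    B    C    D
# 2    G    A    D    I
# 3    L    K    A    J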

Upvotes: 1

pault

Reputation: 43524

Here's an illustration of how to use dropna() as I mentioned in the comments:

df = pd.DataFrame(
    {'A': [5, 3, 5, 6],
     'B': [None, "foo", "bar", "foobar"],
     'C': ["foo", "bar", None, "bat"]
    }
)
no_errors = df.dropna()
errors = df[~df.index.isin(no_errors.index)]

Which results in the following two DataFrames:

print(no_errors)
#   A       B    C
#1  3     foo  bar
#3  6  foobar  bat

print(errors)
#   A     B     C
#0  5  None   foo
#2  5   bar  None

Now, if you want, you can call fillna() on the errors DataFrame.
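For example, using the question's placeholder (a sketch, assuming the errors frame from above):

# replace the remaining NaNs in the error rows with the question's sentinel
errors = errors.fillna('EMPTYFIELD')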

Upvotes: 1

BENY

Reputation: 323326

As I mentioned, using apply, with data from jpp's answer:

df[~df.apply(lambda x: x.str.contains('A')).any(axis=1)]
Out[491]: 
  COL1 COL2 COL3 COL4
1    E    F    G    H
4    S    T    U    V
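As a side note, since the question replaces NaNs with a single sentinel, an exact-match test is a possible simplification (a sketch against the question's DataFrame, assuming the 'EMPTYFIELD' placeholder is already filled in):

# rows where any column equals the placeholder exactly
errors = df[df.eq('EMPTYFIELD').any(axis=1)]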

Upvotes: 0
