Reputation: 1415
I've got a pandas DataFrame that looks like this:
  molecule            species
0        a              [dog]
1        b       [horse, pig]
2        c         [cat, dog]
3        d  [cat, horse, pig]
4        e     [chicken, pig]
and I'd like to extract a DataFrame containing only those rows whose species list contains any of selection = ['cat', 'dog']. So the result should look like this:
  molecule            species
0        a              [dog]
1        c         [cat, dog]
2        d  [cat, horse, pig]
What would be the simplest way to do this?
For testing:
selection = ['cat', 'dog']
df = pd.DataFrame({'molecule': ['a','b','c','d','e'], 'species' : [['dog'], ['horse','pig'],['cat', 'dog'], ['cat','horse','pig'], ['chicken','pig']]})
Upvotes: 49
Views: 81633
Reputation: 8768
One option is to use str.get_dummies()
df.loc[df['species'].str.join('|').str.get_dummies()[selection].any(axis=1)]
or explode()
df.loc[df['species'].explode().isin(selection).groupby(level=0).any()]
or set.intersection()
df.loc[df['species'].map(lambda x: set(x).intersection(selection)).str.len().gt(0)]
Output:
molecule species
0 a [dog]
2 c [cat, dog]
3 d [cat, horse, pig]
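For reference, a sketch of the intermediate steps behind the first option, using the test df and selection from the question:
joined = df['species'].str.join('|')      # e.g. 'cat|horse|pig' for row 3
dummies = joined.str.get_dummies()        # one 0/1 indicator column per animal
mask = dummies[selection].any(axis=1)     # True where the row has 'cat' or 'dog'
df.loc[mask]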
Upvotes: 0
Reputation: 11
I think this would help to filter for what you want, applying isin to each row's list:
new_df = df[df['species'].apply(lambda animals: pd.Series(animals).isin(selection).any())]
Upvotes: 0
Reputation: 323226
IIUC, re-create your df from the list column, then use isin with any; it should be faster than apply:
df[pd.DataFrame(df.species.tolist()).isin(selection).any(axis=1).values]
Out[64]:
molecule species
0 a [dog]
2 c [cat, dog]
3 d [cat, horse, pig]
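For reference, a sketch of the intermediate step here, assuming the test df from the question:
wide = pd.DataFrame(df.species.tolist())   # one column per list position, shorter rows padded with missing values
mask = wide.isin(selection).any(axis=1)    # True where any cell equals 'cat' or 'dog'
df[mask.values]                            # .values avoids index alignment issues if df has a non-default index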
Upvotes: 35
Reputation: 523
This is an easy and basic approach. You can create a function that checks whether any element of the selection list is present in a row's species list.
def check(speciesList):
    flag = False
    for animal in selection:
        if animal in speciesList:
            flag = True
    return flag
You can then apply this function to create a column that contains True or False, based on whether the record contains at least one element of the selection list, and build a new DataFrame from it.
df['containsCatDog'] = df.species.apply(lambda animals: check(animals))
newDf = df[df.containsCatDog == True]
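For reference, an equivalent and more concise version of check using any (same behaviour, with an early exit):
def check(speciesList):
    # True as soon as one of the selected animals appears in the row's list
    return any(animal in speciesList for animal in selection)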
I hope it helps.
Upvotes: 1
Reputation: 27
import pandas as pd

selection = ['cat', 'dog']
df = pd.DataFrame({'molecule': ['a','b','c','d','e'], 'species': [['dog'], ['horse','pig'], ['cat','dog'], ['cat','horse','pig'], ['chicken','pig']]})

# one filtered frame per animal, then concatenate and drop the duplicate rows
df1 = df[df['species'].apply(lambda x: 'dog' in x)]
df2 = df[df['species'].apply(lambda x: 'cat' in x)]
frames = [df1, df2]
result = pd.concat(frames, join='inner', ignore_index=False)
print("result", result)   # still contains the duplicated row for molecule 'c'
result = result[~result.index.duplicated(keep='first')]
print(result)
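A sketch of the same idea generalized to any selection list, rather than hard-coding 'dog' and 'cat':
frames = [df[df['species'].apply(lambda x: animal in x)] for animal in selection]
result = pd.concat(frames, join='inner', ignore_index=False)
result = result[~result.index.duplicated(keep='first')]
print(result)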
Upvotes: 1
Reputation: 523
Using pandas str.contains (which uses a regular expression) after converting each list to a string. Note that with the leading ~ this keeps the rows that do not contain any of the selection; drop the ~ to keep the matching rows instead (see the variant after the output):
df[~df["species"].astype(str).str.contains('cat|dog', regex=True)]
Output:
molecule species
1 b [horse, pig]
4 e [chicken, pig]
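Variant without the negation, which keeps the rows the question asks for; note that plain substring matching can over-match in general (e.g. 'cat' would also hit 'caterpillar'), though it is fine for this data:
df[df["species"].astype(str).str.contains('cat|dog', regex=True)]
  molecule            species
0        a              [dog]
2        c         [cat, dog]
3        d  [cat, horse, pig]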
Upvotes: 12
Reputation: 38415
Using NumPy would be much faster than using pandas in this case.
Option 1: using NumPy set intersection (np.intersect1d):
mask = df.species.apply(lambda x: np.intersect1d(x, selection).size > 0)
df[mask]
450 µs ± 21.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
molecule species
0 a [dog]
2 c [cat, dog]
3 d [cat, horse, pig]
Option 2: a similar solution to the above using np.in1d:
df[df.species.apply(lambda x: np.any(np.in1d(x, selection)))]
420 µs ± 17.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Option 3: interestingly, using a pure Python set is quite fast here:
df[df.species.apply(lambda x: bool(set(x) & set(selection)))]
305 µs ± 5.22 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
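A small refinement of Option 3: precompute the selection as a set once and use isdisjoint, so a new set is not rebuilt for every row (a sketch; not re-timed here):
sel = set(selection)
df[df.species.apply(lambda x: not sel.isdisjoint(x))]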
Upvotes: 9
Reputation: 2287
You can build a boolean mask with apply here.
selection = ['cat', 'dog']
mask = df.species.apply(lambda x: any(item for item in selection if item in x))
df1 = df[mask]
For the DataFrame you've provided as an example above, df1 will be:
molecule species
0 a [dog]
2 c [cat, dog]
3 d [cat, horse, pig]
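The same mask can also be written a bit more directly (equivalent result):
mask = df.species.apply(lambda x: any(item in x for item in selection))
df1 = df[mask]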
Upvotes: 21