jonas

Reputation: 51

Python pandas - what is the proper way to NaN all zeros before first non-zero value in multiple columns?

I have a DataFrame with columns date, a, b, and id. The rows are grouped by id, and the date values repeat for each new id. Within each id, I want to replace the zeros in columns a and b that come before the first non-zero value with NaN. So with the following data:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'date': ['2019-01-01', '2019-02-01', '2019-03-01', '2019-04-01', '2019-05-01']*3,
    'id': [0,0,0,0,0,1,1,1,1,1,2,2,2,2,2],
    'a': [0,0,10,40,20,0,0,0,50,90,0,0,0,0,0],
    'b': [0,0,0,123,345,0,0,555,0,666,0,0,0,0,30]
})

          date  id   a    b
0   2019-01-01   0   0    0
1   2019-02-01   0   0    0
2   2019-03-01   0  10    0
3   2019-04-01   0  40  123
4   2019-05-01   0  20  345
5   2019-01-01   1   0    0
6   2019-02-01   1   0    0
7   2019-03-01   1   0  555
8   2019-04-01   1  50    0
9   2019-05-01   1  90  666
10  2019-01-01   2   0    0
11  2019-02-01   2   0    0
12  2019-03-01   2   0    0
13  2019-04-01   2   0    0
14  2019-05-01   2   0   30

The output should be

          date  id     a      b
0   2019-01-01   0   NaN    NaN
1   2019-02-01   0   NaN    NaN
2   2019-03-01   0  10.0    NaN
3   2019-04-01   0  40.0  123.0
4   2019-05-01   0  20.0  345.0
5   2019-01-01   1   NaN    NaN
6   2019-02-01   1   NaN    NaN
7   2019-03-01   1   NaN  555.0
8   2019-04-01   1  50.0    0.0
9   2019-05-01   1  90.0  666.0
10  2019-01-01   2   0.0    NaN
11  2019-02-01   2   0.0    NaN
12  2019-03-01   2   0.0    NaN
13  2019-04-01   2   0.0    NaN
14  2019-05-01   2   0.0   30.0

Note that if all values for a given id within a column are zeros, the zeros should be kept (as with column a for id 2).

My current solution is two for-loops: one over the columns and one over the groupby objects on id. I believe there is room for improvement here. Any hints/help would be much appreciated.

for col in ['a', 'b']:
    for i, grp in df.groupby('id'):
        min_idx = grp.index.min()
        # index label of the first non-zero value in this group (NaN if all zeros)
        non_z_idx = grp[grp[col] > 0].index.min()

        if not np.isnan(non_z_idx):
            # .loc slicing is label-inclusive, so non_z_idx - 1 stops just
            # before the first non-zero row (relies on the default RangeIndex)
            df.loc[min_idx:non_z_idx - 1, col] = np.nan

Upvotes: 2

Views: 206

Answers (1)

Andy L.

Reputation: 25269

Use two masks built with cummax and transform, then DataFrame.where:

m1 = df[['a','b']].ne(0).groupby(df.id).cummax()          # False until the group's first non-zero value
m2 = df[['a','b']].eq(0).groupby(df.id).transform('all')  # True if the group is never non-zero

df[['a','b']] = df[['a','b']].where(m1 | m2)

Out[88]:
          date  id     a      b
0   2019-01-01   0   NaN    NaN
1   2019-02-01   0   NaN    NaN
2   2019-03-01   0  10.0    NaN
3   2019-04-01   0  40.0  123.0
4   2019-05-01   0  20.0  345.0
5   2019-01-01   1   NaN    NaN
6   2019-02-01   1   NaN    NaN
7   2019-03-01   1   NaN  555.0
8   2019-04-01   1  50.0    0.0
9   2019-05-01   1  90.0  666.0
10  2019-01-01   2   0.0    NaN
11  2019-02-01   2   0.0    NaN
12  2019-03-01   2   0.0    NaN
13  2019-04-01   2   0.0    NaN
14  2019-05-01   2   0.0   30.0
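
To see why this works: within each id, m1 turns True at the first non-zero row and stays True from there on, while m2 rescues the all-zero (id, column) pairs such as column a for id 2; the combined mask marks the values to keep. A quick way to inspect the intermediate masks on the sample frame:

# each mask is a boolean frame indexed like df, one column per value column
print(m1)       # per id/column: False until the first non-zero row, True after
print(m2)       # per id/column: True everywhere when the whole group is all zeros
print(m1 | m2)  # final mask passed to where: True = keep, False = NaN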

If you don't want two groupbys, you can use a single groupby with apply:

m = df[['a','b']].ne(0).groupby(df.id).apply(lambda x: x.cummax() | ~x.any())
df[['a','b']] = df[['a','b']].where(m)
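
If you want to convince yourself the two variants agree, a quick sanity sketch (this assumes m is built from the original, un-mutated df, that m1 and m2 from the first snippet are still in scope, and that groupby.apply returns a frame aligned to df's index, which can vary between pandas versions):

# both ways of building the keep-mask should agree on the sample data
pd.testing.assert_frame_equal(m, m1 | m2)

The apply version makes a single pass over the groups, but a Python-level lambda is usually slower than the two vectorized groupby calls above.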

Upvotes: 1
