Reputation: 1680
I have a pandas dataframe of negative numbers and zeros, with a datetime index.
I'd like to be able to: (1) identify the start and end date of each consecutive run of non-zero values; (2) count the number of days between those two dates; (3) find the minimum value between those two dates.
For example, if my dataframe looks like this:
DATE VAL
2007-06-26 0.000000
2007-06-27 0.000000
2007-06-28 0.000000
2007-06-29 -0.006408
2007-07-02 0.000000
2007-07-03 0.000000
2007-07-04 -0.000003
2007-07-05 0.000000
2007-07-06 0.000000
2007-07-09 0.000000
2007-07-10 -0.018858
2007-07-11 -0.015624
2007-07-12 0.000000
2007-07-13 0.000000
2007-07-16 -0.008562
2007-07-17 -0.006587
I'd like output that looks something like this:
START END DAYS MIN
2007-06-29 2007-06-29 1 -0.006408
2007-07-04 2007-07-04 1 -0.000003
2007-07-10 2007-07-11 2 -0.018858
2007-07-16 2007-07-17 2 -0.008562
It would be preferable if the count of days excluded weekends (i.e., 7/13 to 7/16 counts as 1 day), but I realize that's typically complicated.
The numpy.argmax/argmin methods appear to do a version of what I'd like, but setting axis=1 per the documentation didn't return the collection of index values I expected.
Edit: Should have specified, looking for a solution that doesn't require loops.
Upvotes: 3
Views: 770
Reputation: 53029
A numpy solution, where df is your example DataFrame:
import numpy as np
import pandas as pd

# get data to numpy
date = df.index.to_numpy(dtype='M8[D]')
val = df['VAL'].to_numpy()
# find switches between zero/nonzero; each run yields an (on, off) index pair
on, off = np.diff(val != 0.0, prepend=False, append=False).nonzero()[0].reshape(-1, 2).T
# use switch points to calculate all desired quantities
# (busday_count already excludes weekends, as the question prefers)
out = pd.DataFrame({'START': date[on],
                    'END': date[off - 1],
                    'DAYS': np.busday_count(date[on], date[off - 1]) + 1,
                    'MIN': np.minimum.reduceat(val, on)})
# admire
out
# START END DAYS MIN
# 0 2007-06-29 2007-06-29 1 -0.006408
# 1 2007-07-04 2007-07-04 1 -0.000003
# 2 2007-07-10 2007-07-11 2 -0.018858
# 3 2007-07-16 2007-07-17 2 -0.008562
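The heart of this answer is the diff/nonzero switch detection. A minimal sketch on a toy array (values invented for illustration) shows how each non-zero run produces an (on, off) index pair:

```python
import numpy as np

val = np.array([0.0, -1.0, -2.0, 0.0, 0.0, -3.0])
# boolean diff (XOR) marks every zero<->nonzero transition;
# padding with False on both ends guarantees an even number of switches
edges = np.diff(val != 0.0, prepend=False, append=False).nonzero()[0]
on, off = edges.reshape(-1, 2).T
print(on, off)  # [1 5] [3 6] -> runs are val[1:3] and val[5:6]
```

Note that `off` is one past the end of each run, which is why the answer indexes `date[off-1]` for the END column.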
Upvotes: 1
Reputation: 2821
This one uses logic similar to the initial solution (by Allen), but with fewer applys. Not sure about the performance comparison.
# a new group begins when the previous value is 0 and the current one is negative
df['NEW_GROUP'] = df['VAL'].shift(1) == 0
df['NEW_GROUP'] &= df['VAL'] < 0
# Group by the cumulative count of new-group flags, which gives the group number.
# Return a Series directly from `apply` to obviate further transformations.
print(df.loc[df['VAL'] < 0]
        .groupby(df['NEW_GROUP'].cumsum())
        .apply(lambda x: pd.Series([x.DATE.iloc[0], x.DATE.iloc[-1], x.VAL.min(), len(x)],
                                   index=['START', 'END', 'MIN', 'DAYS'])))
Output:
START END MIN DAYS
NEW_GROUP
1 2007-06-29 2007-06-29 -0.006408 1
2 2007-07-04 2007-07-04 -0.000003 1
3 2007-07-10 2007-07-11 -0.018858 2
4 2007-07-16 2007-07-17 -0.008562 2
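The shift/cumsum pattern above can be seen in isolation on a toy Series (values invented for illustration): each True in the new-group flag bumps the running group id, and filtering to negative values then keeps only the runs.

```python
import pandas as pd

val = pd.Series([0, -1, 0, -2, -3, 0])
# True exactly where a run starts: previous value was 0, current is negative
new_group = (val.shift(1) == 0) & (val < 0)
group_id = new_group.cumsum()
print(group_id.tolist())  # [0, 1, 1, 2, 2, 2]
```

Rows 1 and 3-4 (the two runs) carry ids 1 and 2; the trailing zeros share a run's id but are dropped by the `df['VAL'] < 0` filter before grouping.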
Upvotes: 1
Reputation: 862491
Solution with named aggregation, working in pandas 0.25+:
#convert DatetimeIndex to column
df = df.reset_index()
#filter values equal 0
m = df['VAL'].eq(0)
#create groups only for non 0 rows filtering with inverting mask by ~
g = m.ne(m.shift()).cumsum()[~m]
#aggregation by groups
df1 = df.groupby(g).agg(START=('DATE', 'first'),
                        END=('DATE', 'last'),
                        DAYS=('DATE', 'size'),
                        MIN=('VAL', 'min')).reset_index(drop=True)
print (df1)
START END DAYS MIN
0 2007-06-29 2007-06-29 1 -0.006408
1 2007-07-04 2007-07-04 1 -0.000003
2 2007-07-10 2007-07-11 2 -0.018858
3 2007-07-16 2007-07-17 2 -0.008562
A solution for pandas < 0.25 is possible by passing a dictionary to agg and then setting new column names:
df = df.reset_index()
m = df['VAL'].eq(0)
g = m.ne(m.shift()).cumsum()[~m]
df1 = df.groupby(g).agg({'DATE':['first','last','size'], 'VAL':'min'}).reset_index(drop=True)
df1.columns = ['START','END','DAYS','MIN']
print (df1)
START END DAYS MIN
0 2007-06-29 2007-06-29 1 -0.006408
1 2007-07-04 2007-07-04 1 -0.000003
2 2007-07-10 2007-07-11 2 -0.018858
3 2007-07-16 2007-07-17 2 -0.008562
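The grouping key `g` is the standard consecutive-run labeling trick: the mask changes whenever the zero/non-zero state flips, so cumsum assigns each run a distinct id. A toy sketch (values invented for illustration):

```python
import pandas as pd

val = pd.Series([0, 0, -1, 0, -2, -3])
m = val.eq(0)
# label increments at every zero<->non-zero boundary
g = m.ne(m.shift()).cumsum()
print(g.tolist())       # [1, 1, 2, 3, 4, 4]
print(g[~m].tolist())   # [2, 4, 4] -> two non-zero groups
```

Indexing with `~m` throws away the zero-run labels, leaving one label per non-zero run for the groupby.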
Upvotes: 4
Reputation: 163
You can use this: first, read the DataFrame from a file:
import pandas as pd
df=pd.read_csv("file.csv")
out:
DATE VAL
0 2007-06-26 0.000000
1 2007-06-27 0.000000
2 2007-06-28 0.000000
3 2007-06-29 -0.006408
4 2007-07-02 0.000000
5 2007-07-03 0.000000
6 2007-07-04 -0.000003
7 2007-07-05 0.000000
8 2007-07-06 0.000000
9 2007-07-09 0.000000
10 2007-07-10 -0.018858
11 2007-07-11 -0.015624
12 2007-07-12 0.000000
13 2007-07-13 0.000000
14 2007-07-16 -0.008562
15 2007-07-17 -0.006587
and then the main code:
from datetime import datetime, timedelta

last_date = 0
min_val = 0
mat = []
st = 0
for index, row in df.iterrows():
    if row['VAL'] != 0:
        st = st + 1
        datetime_object = datetime.strptime(row['DATE'], '%Y-%m-%d')
        if st == 1:
            # first non-zero row starts the first run
            start = datetime_object
            last_date = start
            if row['VAL'] < min_val:
                min_val = row['VAL']
        else:
            if last_date + timedelta(days=1) == datetime_object:
                # consecutive date: extend the current run
                last_date = datetime_object
                if row['VAL'] < min_val:
                    min_val = row['VAL']
            else:
                # gap: close out the current run and start a new one
                arr = [str(start.date()), str(last_date.date()),
                       (last_date - start).days + 1, min_val]
                start = datetime_object
                last_date = datetime_object
                min_val = row['VAL']
                mat.append(arr)
# close out the final run after the loop ends
arr = [str(start.date()), str(last_date.date()),
       (last_date - start).days + 1, min_val]
mat.append(arr)
df = pd.DataFrame(mat, columns=['start', 'end', 'days', 'min'])
df
df = pd.DataFrame(mat, columns = ['start', 'end', 'days', 'min'])
df
out:
start end days min
0 2007-06-29 2007-06-29 1 -0.006408
1 2007-07-04 2007-07-04 1 -0.000003
2 2007-07-10 2007-07-11 2 -0.018858
3 2007-07-16 2007-07-17 2 -0.008562
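The run-boundary test in this loop reduces to a one-day timedelta comparison, shown standalone below (dates taken from the example data). Note that this check treats a Friday-to-Monday pair as non-consecutive, which is fine here only because the sample data has explicit zero rows on the intervening weekdays.

```python
from datetime import datetime, timedelta

a = datetime.strptime('2007-07-10', '%Y-%m-%d')
b = datetime.strptime('2007-07-11', '%Y-%m-%d')
# a run continues only when the next non-zero date is exactly one day later
print(a + timedelta(days=1) == b)  # True
```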
Upvotes: 0
Reputation: 19947
First, create a flag to find the non-zero records and assign them to the same groups; then group by and calculate the attributes you want.
(
    df.assign(Flag=np.where(df.VAL.ge(0), 1, np.nan))
      .assign(Flag=lambda x: x.Flag.fillna(x.Flag.cumsum().ffill()))
      .loc[lambda x: x.Flag.ne(1)]
      .groupby('Flag')
      .apply(lambda x: [x.DATE.iloc[0], x.DATE.iloc[-1], len(x), x.VAL.min()])
      .apply(pd.Series)
      .set_axis(['START', 'END', 'DAYS', 'MIN'], axis=1, inplace=False)
)
START END DAYS MIN
Flag
3.0 2007-06-29 2007-06-29 1 -0.006408
5.0 2007-07-04 2007-07-04 1 -0.000003
8.0 2007-07-10 2007-07-11 2 -0.018858
10.0 2007-07-16 2007-07-17 2 -0.008562
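The where/cumsum/ffill flag construction can be traced on a toy Series (values invented for illustration): zero rows get a running count, NaN rows inherit the count of the zero row just above them, so each negative run ends up sharing one label greater than 1.

```python
import numpy as np
import pandas as pd

val = pd.Series([0, 0, -1, 0, -2, -3])
flag = pd.Series(np.where(val.ge(0), 1, np.nan))  # 1 on zeros, NaN on negatives
# cumsum numbers the zero rows; ffill copies the latest number down into each run
flag = flag.fillna(flag.cumsum().ffill())
print(flag.tolist())  # [1.0, 1.0, 2.0, 1.0, 3.0, 3.0]
```

Filtering with `.ne(1)` then keeps only the labeled runs (2.0 and 3.0 here), which is why the group index in the output is the count of zero rows seen before each run.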
Upvotes: 3