Reputation: 85
I have two CSV files, both with different time frequencies throughout - i.e. a measurement every 5 minutes, then every hour, etc.
What I want to do is: for the second CSV (column 2), if there is a value greater than 190 anywhere in a given hour, then get rid of the corresponding hour in CSV one.
Is there a magical way using Pandas to do that? I was thinking of setting the condition to True and False as an index and then multiplying the first CSV's data by that, but I thought that for this they would need to be at exactly the same data intervals.
CSV1 has data of the kind (Date,A,B,C,D,E,F,G,H):
24-jan-08 23:50, -8.6, 7.7, 0.0213, .9820, 0.0213, 1.6316, 1.00,46.810
24-jan-08 23:55, -6.7, 7.7, 0.0213, .9824, 0.0213, 1.6321, 1.00,46.802
25-jan-08 00:00, -1.7, 7.7, 0.0213, .9828, 0.0213, 1.6328, 1.00,46.799
25-jan-08 00:05, -32, 7.7, 0.0213, .9835, 0.0213, 1.6334, 1.00,46.757
25-jan-08 00:10, -11.1, 7.7, 0.0213, .9842, 0.0213, 1.6342, 1.00,46.742
etc., but as mentioned it goes from 5-minutely to hourly later; the CSV file is too big to post here.
CSV2 has data of the kind (Date,A,B):
2008-01-24 23:50,6.55,186.9
2008-01-24 23:51,6.84,188.6
2008-01-24 23:52,7.14,188.1
2008-01-24 23:53,7.12,189.9
2008-01-24 23:54,7.45,188.6
2008-01-24 23:55,7.52,190.5
2008-01-24 23:56,7.29,189.5
2008-01-24 23:57,7.07,192.4
2008-01-24 23:58,7.33,193.7
2008-01-24 23:59,7.25,192.6
2008-01-25 00:02,6.52,191
2008-01-25 00:03,6.58,189
2008-01-25 00:04,6.43,190.5
2008-01-25 00:05,6.6,188.3
2008-01-25 00:06,6.52,188.7
2008-01-25 00:07,6.75,188.9
2008-01-25 00:08,6.62,188.9
2008-01-25 00:09,6.26,188.8
2008-01-25 00:10,6.6,193.2
The 190 is entirely arbitrary - I need to pick a number appropriate to the full dataset.
Upvotes: 1
Views: 122
Reputation: 863751
Setup - double read_csv:
import pandas as pd
import io
temp=u"""24-jan-08 23:50,-8.6,7.7,0.0213,.9820,0.0213,1.6316,1.00,46.810
24-jan-08 23:55,-6.7,7.7,0.0213,.9824,0.0213,1.6321,1.00,46.802
25-jan-08 00:00,-1.7,7.7,0.0213,.9828,0.0213,1.6328,1.00,46.799
25-jan-08 00:05,-32,7.7,0.0213,.9835,0.0213,1.6334,1.00,46.757
25-jan-08 00:10,-11.1,7.7,0.0213,.9842,0.0213,1.6342,1.00,46.742"""
# after testing, replace io.StringIO(temp) with the filename
df1 = pd.read_csv(io.StringIO(temp), parse_dates=[0], names=['Date','A','B','C','D','E','F','G', 'H'])
temp=u"""
2008-01-24 23:50,6.55,186.9
2008-01-24 23:51,6.84,188.6
2008-01-24 23:52,7.14,188.1
2008-01-24 23:53,7.12,189.9
2008-01-24 23:54,7.45,188.6
2008-01-24 23:55,7.52,190.5
2008-01-24 23:56,7.29,189.5
2008-01-24 23:57,7.07,192.4
2008-01-24 23:58,7.33,193.7
2008-01-24 23:59,7.25,192.6
2008-01-25 00:02,6.52,191
2008-01-25 00:03,6.58,189
2008-01-25 00:04,6.43,190.5
2008-01-25 00:05,6.6,188.3
2008-01-25 00:06,6.52,188.7
2008-01-25 00:07,6.75,188.9
2008-01-25 00:08,6.62,188.9
2008-01-25 00:09,6.26,188.8
2008-01-25 00:10,6.6,193.2"""
# after testing, replace io.StringIO(temp) with the filename
df2 = pd.read_csv(io.StringIO(temp), parse_dates=[0],names=['Date','A','B'])
print (df1)
Date A B C D E F G H
0 2008-01-24 23:50:00 -8.6 7.7 0.0213 0.9820 0.0213 1.6316 1.0 46.810
1 2008-01-24 23:55:00 -6.7 7.7 0.0213 0.9824 0.0213 1.6321 1.0 46.802
2 2008-01-25 00:00:00 -1.7 7.7 0.0213 0.9828 0.0213 1.6328 1.0 46.799
3 2008-01-25 00:05:00 -32.0 7.7 0.0213 0.9835 0.0213 1.6334 1.0 46.757
4 2008-01-25 00:10:00 -11.1 7.7 0.0213 0.9842 0.0213 1.6342 1.0 46.742
print (df2)
Date A B
0 2008-01-24 23:50:00 6.55 186.9
1 2008-01-24 23:51:00 6.84 188.6
2 2008-01-24 23:52:00 7.14 188.1
3 2008-01-24 23:53:00 7.12 189.9
4 2008-01-24 23:54:00 7.45 188.6
5 2008-01-24 23:55:00 7.52 190.5
6 2008-01-24 23:56:00 7.29 189.5
7 2008-01-24 23:57:00 7.07 192.4
8 2008-01-24 23:58:00 7.33 193.7
9 2008-01-24 23:59:00 7.25 192.6
10 2008-01-25 00:02:00 6.52 191.0
11 2008-01-25 00:03:00 6.58 189.0
12 2008-01-25 00:04:00 6.43 190.5
13 2008-01-25 00:05:00 6.60 188.3
14 2008-01-25 00:06:00 6.52 188.7
15 2008-01-25 00:07:00 6.75 188.9
16 2008-01-25 00:08:00 6.62 188.9
17 2008-01-25 00:09:00 6.26 188.8
18 2008-01-25 00:10:00 6.60 193.2
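As a side note, if the '24-jan-08 23:50' style dates in the real CSV1 are not parsed correctly by parse_dates, the column can be converted explicitly afterwards - a minimal sketch, assuming the format shown in the sample and a hypothetical filename csv1.csv:
df1 = pd.read_csv('csv1.csv', names=['Date','A','B','C','D','E','F','G','H'])
# '%d-%b-%y %H:%M' matches e.g. '24-jan-08 23:50' (month-name matching is case-insensitive)
df1['Date'] = pd.to_datetime(df1['Date'], format='%d-%b-%y %H:%M')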
You can first convert the Date columns to hourly periods with to_period:
df1.index = df1['Date'].dt.to_period('h')
df2['per'] = df2['Date'].dt.to_period('h')
print (df1)
Date A B C D E \
Date
2008-01-24 23:00 2008-01-24 23:50:00 -8.6 7.7 0.0213 0.9820 0.0213
2008-01-24 23:00 2008-01-24 23:55:00 -6.7 7.7 0.0213 0.9824 0.0213
2008-01-25 00:00 2008-01-25 00:00:00 -1.7 7.7 0.0213 0.9828 0.0213
2008-01-25 00:00 2008-01-25 00:05:00 -32.0 7.7 0.0213 0.9835 0.0213
2008-01-25 00:00 2008-01-25 00:10:00 -11.1 7.7 0.0213 0.9842 0.0213
F G H
Date
2008-01-24 23:00 1.6316 1.0 46.810
2008-01-24 23:00 1.6321 1.0 46.802
2008-01-25 00:00 1.6328 1.0 46.799
2008-01-25 00:00 1.6334 1.0 46.757
2008-01-25 00:00 1.6342 1.0 46.742
print (df2)
Date A B per
0 2008-01-24 23:50:00 6.55 186.9 2008-01-24 23:00
1 2008-01-24 23:51:00 6.84 188.6 2008-01-24 23:00
2 2008-01-24 23:52:00 7.14 188.1 2008-01-24 23:00
3 2008-01-24 23:53:00 7.12 189.9 2008-01-24 23:00
4 2008-01-24 23:54:00 7.45 188.6 2008-01-24 23:00
5 2008-01-24 23:55:00 7.52 190.5 2008-01-24 23:00
6 2008-01-24 23:56:00 7.29 189.5 2008-01-24 23:00
7 2008-01-24 23:57:00 7.07 192.4 2008-01-24 23:00
8 2008-01-24 23:58:00 7.33 193.7 2008-01-24 23:00
9 2008-01-24 23:59:00 7.25 192.6 2008-01-24 23:00
10 2008-01-25 00:02:00 6.52 191.0 2008-01-25 00:00
11 2008-01-25 00:03:00 6.58 189.0 2008-01-25 00:00
12 2008-01-25 00:04:00 6.43 190.5 2008-01-25 00:00
13 2008-01-25 00:05:00 6.60 188.3 2008-01-25 00:00
14 2008-01-25 00:06:00 6.52 188.7 2008-01-25 00:00
15 2008-01-25 00:07:00 6.75 188.9 2008-01-25 00:00
16 2008-01-25 00:08:00 6.62 188.9 2008-01-25 00:00
17 2008-01-25 00:09:00 6.26 188.8 2008-01-25 00:00
18 2008-01-25 00:10:00 6.60 193.2 2008-01-25 00:00
Then find the unique periods matching the condition:
pers = df2.loc[df2.B > 190, 'per'].unique()
print (pers)
[Period('2008-01-24 23:00', 'H') Period('2008-01-25 00:00', 'H')]
Last, drop all rows of df1 whose hourly period is in pers (here every hour in the sample contains a value above 190, so all rows are removed):
print (df1.drop(pers))
Empty DataFrame
Columns: [Date, A, B, C, D, E, F, G, H]
Index: []
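The same result can also be obtained with a boolean mask instead of drop, which is closer to the True/False indexing idea from the question - a small sketch using the pers array from above:
# keep only the rows of df1 whose hourly period is NOT among the flagged periods
print (df1[~df1.index.isin(pers)])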
EDIT by comment:
If df1 and df2 have a DatetimeIndex, use:
df1.index = df1.index.to_period('h')
df2['per'] = df2.index.to_period('h')
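For completeness, a minimal end-to-end sketch on the real files - assuming hypothetical filenames csv1.csv and csv2.csv and that neither file has a header row, as in the samples:
import pandas as pd

df1 = pd.read_csv('csv1.csv', parse_dates=[0],
                  names=['Date','A','B','C','D','E','F','G','H'])
df2 = pd.read_csv('csv2.csv', parse_dates=[0], names=['Date','A','B'])

# label each row with the hour it belongs to
df1.index = df1['Date'].dt.to_period('h')
df2['per'] = df2['Date'].dt.to_period('h')

# hours where column B of CSV2 exceeds the (arbitrary) threshold
pers = df2.loc[df2.B > 190, 'per'].unique()

# remove those hours from CSV1
df1 = df1.drop(pers)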
Upvotes: 1