user1913171

Time Series using numpy or pandas

I'm a beginner in the Python ecosystem and I'm having trouble working with time series data.

The below is my OHLC 1 minute data.

2011-11-01,9:00:00,248.50,248.95,248.20,248.70
2011-11-01,9:01:00,248.70,249.00,248.65,248.85
2011-11-01,9:02:00,248.90,249.25,248.70,249.15
...
2011-11-01,15:03:00,250.25,250.30,250.05,250.15
2011-11-01,15:04:00,250.15,250.60,250.10,250.60
2011-11-01,15:15:00,250.55,250.55,250.55,250.55
2011-11-02,9:00:00,245.55,246.25,245.40,245.80
2011-11-02,9:01:00,245.85,246.40,245.75,246.35
2011-11-02,9:02:00,246.30,246.45,245.75,245.80
2011-11-02,9:03:00,245.75,245.85,245.30,245.35
...
  1. I'd like to extract the "Close" value from each row and reshape the data so that each day becomes one line, like the following:

    2011-11-01, 248.70, 248.85, 249.15, ... 250.15, 250.60, 250.55
    2011-11-02, 245.80, 246.35, 245.80, ...
    ...
    
  2. I'd like to find the highest close value and its time (minute) for each day, like the following:

    2011-11-01, 10:23:03, 250.55
    2011-11-02, 11:02:36, 251.00
    ....
    

Any help would be much appreciated.

Thank you in advance,

Upvotes: 2

Views: 5828

Answers (1)

Viktor Kerkez

You can use the pandas library. In the case of your data you can get the max as:

import pandas as pd
# Read the file, parse the first two columns as a single
# datetime, and use it as the index
df = pd.read_csv('your_file', parse_dates=[[0, 1]], index_col=0, header=None)
# Keep only the close column (label 5)
df = df[[5]]
# Resample to daily frequency, taking the max value for each day
df.resample('D').max()
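As an aside, recent pandas versions have deprecated the `parse_dates=[[0, 1]]` way of merging two columns into one datetime. A minimal sketch of the same daily max, building the index by hand instead (using an inline sample of the question's data rather than a file):

```python
import io
import pandas as pd

# Inline sample of the OHLC data from the question (no header row)
csv = io.StringIO(
    "2011-11-01,9:00:00,248.50,248.95,248.20,248.70\n"
    "2011-11-01,9:01:00,248.70,249.00,248.65,248.85\n"
    "2011-11-02,9:00:00,245.55,246.25,245.40,245.80\n"
)
df = pd.read_csv(csv, header=None,
                 names=['date', 'time', 'open', 'high', 'low', 'close'])
# Combine the date and time columns into a DatetimeIndex by hand,
# which avoids the deprecated parse_dates=[[0, 1]] form
df.index = pd.to_datetime(df['date'] + ' ' + df['time'])
df = df[['close']]

print(df.resample('D').max())
```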

If you also want to show the times, keep them in your DataFrame as a column and apply a function to each daily group that picks out the row with the maximum close:

>>> df = pd.read_csv('your_file', parse_dates=[[0, 1]], index_col=0, header=None,
                     usecols=[0, 1, 5], names=['d', 't', 'close'])
>>> df['time'] = df.index
>>> df.groupby(pd.Grouper(freq='D')).apply(lambda group: group.loc[group['close'].idxmax()])
             close                time
d_t                             
2011-11-01  250.60 2011-11-01 15:04:00
2011-11-02  246.35 2011-11-02 09:01:00

And if you want a list of the prices per day, just group by day and use apply on the grouped object to return the list of all closes from each group:

>>> df.groupby(lambda dt: dt.date()).apply(lambda group: list(group['close']))
2011-11-01    [248.7, 248.85, 249.15, 250.15, 250.6, 250.55]
2011-11-02                    [245.8, 246.35, 245.8, 245.35]
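The output above gives a list per day; to get exactly the layout asked for in point 1 (one row per day, one column per minute bar), one possible sketch is to number the bars within each day and pivot that number into columns. This again uses an inline sample of the data and builds the datetime index by hand:

```python
import io
import pandas as pd

# Inline sample of the question's data
csv = io.StringIO(
    "2011-11-01,9:00:00,248.50,248.95,248.20,248.70\n"
    "2011-11-01,9:01:00,248.70,249.00,248.65,248.85\n"
    "2011-11-02,9:00:00,245.55,246.25,245.40,245.80\n"
    "2011-11-02,9:01:00,245.85,246.40,245.75,246.35\n"
)
df = pd.read_csv(csv, header=None,
                 names=['date', 'time', 'open', 'high', 'low', 'close'])
df.index = pd.to_datetime(df['date'] + ' ' + df['time'])

# Number the bars within each day, then unstack that number into columns
day = df.index.date
bar = df.groupby(day).cumcount()
wide = df['close'].groupby([day, bar]).first().unstack()
print(wide)
```

Note that days with fewer bars than the longest day end up padded with NaN in the extra columns.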

For more information, take a look at the pandas docs on Time Series.

Update for the concrete data set:

The problem with your data set is that some days have no data at all, so the function applied to each daily group should handle the empty case:

def func(group):
    # Empty days (e.g. weekends) have no rows, so there is no max to take
    if len(group) == 0:
        return None
    return group.loc[group['close'].idxmax()]

df.groupby(pd.Grouper(freq='D')).apply(func).dropna()
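To see the gap handling run end to end, here is a self-contained sketch (inline sample data with a deliberate gap day) that iterates over the daily bins explicitly and skips the empty ones, which sidesteps any question of how empty groups are passed to an aggregation function:

```python
import io
import pandas as pd

# Sample data with a gap: no rows at all for 2011-11-02
csv = io.StringIO(
    "2011-11-01,9:00:00,248.50,248.95,248.20,248.70\n"
    "2011-11-01,9:01:00,248.70,249.00,248.65,248.85\n"
    "2011-11-03,9:00:00,245.55,246.25,245.40,245.80\n"
)
df = pd.read_csv(csv, header=None,
                 names=['date', 'time', 'open', 'high', 'low', 'close'])
df.index = pd.to_datetime(df['date'] + ' ' + df['time'])
df = df[['close']]
df['time'] = df.index

# Iterate over the daily bins; skip empty ones instead of relying
# on the resampler to aggregate them
rows = []
for day, group in df.resample('D'):
    if len(group) == 0:
        continue
    rows.append(group.loc[group['close'].idxmax()])
result = pd.DataFrame(rows)
print(result)
```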

Upvotes: 5
