Hussain

Reputation: 5177

Use pandas to group data daily and weekly based on epoch time

I have data like,

[
  {
    "arrivalDate": 1493611200000,
    "price": 4588
  },
  {
    "arrivalDate": 1493352000000,
    "price": 4630
  },
  {
    "arrivalDate": 1493179200000,
    "price": 4553
  },
  {
    "arrivalDate": 1493092800000,
    "price": 4530
  },
  {
    "arrivalDate": 1493006400000,
    "price": 4578
  },
  {
    "arrivalDate": 1490572800000,
    "price": 4457
  }
]

and I want to aggregate and show graphs for daily prices and weekly average prices.

The daily stats are easy, but I am not able to figure out the weekly average. Please note that my epoch timestamps are in milliseconds.

I have tried,

import pandas
import pytz
from datetime import datetime

def get_daily_weekly_and_monthly_aggregates(datewise_prices):
    p_dataframe = pandas.DataFrame.from_records(datewise_prices)

    daily = p_dataframe.sort_values(by=['arrivalDate'], ascending=[False])[:7]  # for 7 days
    # TODO - This should be configurable. Not hardcoded.
    daily['label'] = daily['arrivalDate'].map(lambda x: pytz.timezone('Asia/Kolkata').localize(datetime.fromtimestamp(x/1000)).strftime("%d %b"))
    daily_list = daily.to_dict('records')

    pandas.to_datetime(p_dataframe['arrivalDate'], unit='ms')
    p_dataframe.set_index(pandas.DatetimeIndex(p_dataframe["arrivalDate"]), inplace=True)
    weekly = p_dataframe.groupby(pandas.TimeGrouper("W")).agg(lambda grp: list(grp))
    weekly.mean()
    weekly_list = weekly.to_dict('records')

    return daily_list, weekly_list

I get weekly_list as,

[
  {
    "arrivalDate": [
      1490054400000,
      1490572800000,
      1493006400000,
      1493092800000,
      1493179200000,
      1493352000000,
      1493611200000
    ],
    "modalPrice": [
      4357,
      4457,
      4578,
      4530,
      4553,
      4630,
      4588
    ]
  }
]

which is not desired.

What I actually need is,

[
    {
        "week": "1 May to 7 May",
        "avg": 4588
    },
    {
        "week": "24 Apr to 30 Apr",
        "avg": 4572
    },
    {
        "week": "27 Mar to 02 Apr",
        "avg": 4457
    }
] 

How do I do this?

Upvotes: 3

Views: 1747

Answers (2)

jezrael

Reputation: 862581

You can use to_datetime first and then resample with aggregating sum; last, if you need to remove NaN, add dropna:

#with borrowing data from piRSquared answer
df['arrivalDate'] = pd.to_datetime(df.arrivalDate, unit='ms')
df1 = df.resample('W', on='arrivalDate')['price'].sum().dropna().reset_index()
print (df1)
  arrivalDate    price
0  2017-04-02   4457.0
1  2017-04-30  18291.0
2  2017-05-07   4588.0
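
Note: in newer pandas, sum over an empty weekly bin returns 0 instead of NaN, so dropna no longer removes empty weeks; a minimal sketch, passing min_count=1 to sum to restore the NaN behaviour:

#newer pandas only: keep empty bins as NaN so dropna can drop them
df1 = df.resample('W', on='arrivalDate')['price'].sum(min_count=1).dropna().reset_index()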

A similar solution without the on parameter in resample; instead use set_index to create a DatetimeIndex:

df['arrivalDate'] = pd.to_datetime(df.arrivalDate, unit='ms')
df1 = df.set_index('arrivalDate').resample('W')['price'].sum().dropna().reset_index()
print (df1)
  arrivalDate    price
0  2017-04-02   4457.0
1  2017-04-30  18291.0
2  2017-05-07   4588.0

Then build the week labels as date ranges by subtracting 6 days from each week-ending date, and last create the list:

df1['arrivalDate'] = (df1['arrivalDate'] - pd.offsets.DateOffset(days=6)).dt.strftime('%d %b') + \
                      ' to ' + \
                      df1['arrivalDate'].dt.strftime('%d %b')

#if prices are always int
df1['price'] = df1['price'].astype(int)
print (df1)
        arrivalDate  price
0  27 Mar to 02 Apr   4457
1  24 Apr to 30 Apr  18291
2  01 May to 07 May   4588

L = df1.to_dict(orient='records')
print (L)
[{'arrivalDate': '27 Mar to 02 Apr', 'price': 4457}, 
 {'arrivalDate': '24 Apr to 30 Apr', 'price': 18291}, 
 {'arrivalDate': '01 May to 07 May', 'price': 4588}]
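
Since the question asks for weekly averages rather than sums, the same pipeline with mean in place of sum gives the desired output; a sketch over the six records from the question (week and avg are column names chosen here to match the desired output):

df['arrivalDate'] = pd.to_datetime(df.arrivalDate, unit='ms')
df1 = df.resample('W', on='arrivalDate')['price'].mean().dropna().reset_index()
df1['week'] = (df1['arrivalDate'] - pd.offsets.DateOffset(days=6)).dt.strftime('%d %b') + \
              ' to ' + \
              df1['arrivalDate'].dt.strftime('%d %b')
#truncate the mean to int to match the desired output
df1['avg'] = df1['price'].astype(int)
L = df1[['week', 'avg']].to_dict(orient='records')
print (L)
[{'week': '27 Mar to 02 Apr', 'avg': 4457}, 
 {'week': '24 Apr to 30 Apr', 'avg': 4572}, 
 {'week': '01 May to 07 May', 'avg': 4588}]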

EDIT:

#https://bpaste.net/show/909564ed4cb1
data = [
  {
    "price": 3050,
    "arrivalDate": 1489536000000
  },
  {
    "price": 3240,
    "arrivalDate": 1489622400000
  },

...
...

  {
    "price": 3150,
    "arrivalDate": 1493611200000
  },
  {
    "price": 3150,
    "arrivalDate": 1493697600000
  }
]
df = pd.DataFrame(data)

df['arrivalDate'] = pd.to_datetime(df.arrivalDate, unit='ms')
df1 = df.resample('W', on='arrivalDate')['price'].sum().dropna().reset_index()
print (df1)
  arrivalDate    price
0  2017-03-19  12460.0
1  2017-03-26  15930.0
2  2017-04-02  15735.0
3  2017-04-09  18880.0
4  2017-04-30  12640.0
5  2017-05-07   6300.0

df1['arrivalDate'] = (df1['arrivalDate'] - pd.offsets.DateOffset(days=6)).dt.strftime('%d %b') + ' to ' + \
                      df1['arrivalDate'].dt.strftime('%d %b')

#if prices are always int
df1['price'] = df1['price'].astype(int)
print (df1)
        arrivalDate  price
0  13 Mar to 19 Mar  12460
1  20 Mar to 26 Mar  15930
2  27 Mar to 02 Apr  15735
3  03 Apr to 09 Apr  18880
4  24 Apr to 30 Apr  12640
5  01 May to 07 May   6300

L = df1.to_dict(orient='records')
print (L)
[{'arrivalDate': '13 Mar to 19 Mar', 'price': 12460}, 
 {'arrivalDate': '20 Mar to 26 Mar', 'price': 15930}, 
 {'arrivalDate': '27 Mar to 02 Apr', 'price': 15735}, 
 {'arrivalDate': '03 Apr to 09 Apr', 'price': 18880}, 
 {'arrivalDate': '24 Apr to 30 Apr', 'price': 12640}, 
 {'arrivalDate': '01 May to 07 May', 'price': 6300}]

And df, for checking:

print (df)
           arrivalDate  price
0  2017-03-15 00:00:00   3050
1  2017-03-16 00:00:00   3240
2  2017-03-17 00:00:00   3120
3  2017-03-18 00:00:00   3050
4  2017-03-20 00:00:00   3200
5  2017-03-22 00:00:00   3200
6  2017-03-23 00:00:00   3210
7  2017-03-24 00:00:00   3200
8  2017-03-25 00:00:00   3120
9  2017-03-27 00:00:00   3100
10 2017-03-28 00:00:00   3240
11 2017-03-30 00:00:00   3100
12 2017-03-31 00:00:00   3145
13 2017-04-01 00:00:00   3150
14 2017-04-04 00:00:00   3150
15 2017-04-05 00:00:00   3150
16 2017-04-06 00:00:00   3110
17 2017-04-07 00:00:00   3210
18 2017-04-08 00:00:00   3210
19 2017-04-09 00:00:00   3050
20 2017-04-26 04:00:00   3110
21 2017-04-27 04:00:00   3210
22 2017-04-28 04:00:00   3200
23 2017-04-29 04:00:00   3120
24 2017-05-01 04:00:00   3150
25 2017-05-02 04:00:00   3150

Upvotes: 4

piRSquared

Reputation: 294228

See setup below for creation of the dataframe df

First
You want to convert your 'arrivalDate' column to a datetime column with pd.to_datetime. This will allow you to utilize pandas' built-in date functionality. You'll notice that I use the unit parameter, where I specify that the values coming in are milliseconds from epoch.

df['arrivalDate'] = pd.to_datetime(df['arrivalDate'], unit='ms')

Now we can use pandas groupby with TimeGrouper or resample in order to perform our aggregation.

option 1
pd.TimeGrouper

s = df.set_index(pd.to_datetime(df.arrivalDate, unit='ms')).price
s.groupby(pd.TimeGrouper('W')).sum().dropna()

arrivalDate
2017-04-02     4457.0
2017-04-30    18291.0
2017-05-07     4588.0
Name: price, dtype: float64
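
(pd.TimeGrouper was deprecated in later pandas releases; pd.Grouper with a freq argument is the drop-in replacement:)

s.groupby(pd.Grouper(freq='W')).sum().dropna()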

option 2
pd.Series.resample

s = df.set_index(pd.to_datetime(df.arrivalDate, unit='ms')).price
s.resample('W').sum().dropna()

arrivalDate
2017-04-02     4457.0
2017-04-30    18291.0
2017-05-07     4588.0
Name: price, dtype: float64

option 3
pd.Series.resample with the on parameter
We can skip updating the index by using the on parameter

df['arrivalDate'] = pd.to_datetime(df['arrivalDate'], unit='ms')
df.resample('W', on='arrivalDate').price.sum().dropna()

arrivalDate
2017-04-02     4457.0
2017-04-30    18291.0
2017-05-07     4588.0
Name: price, dtype: float64
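
Since the question ultimately wants weekly averages, any of the options above works with mean in place of sum; for example with option 3:

df.resample('W', on='arrivalDate').price.mean().dropna()

arrivalDate
2017-04-02    4457.00
2017-04-30    4572.75
2017-05-07    4588.00
Name: price, dtype: float64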

timing

%timeit df.resample('W', on='arrivalDate').price.sum().dropna()
1000 loops, best of 3: 1.86 ms per loop

%timeit df.set_index('arrivalDate').price.groupby(pd.TimeGrouper('W')).sum().dropna()
1000 loops, best of 3: 1.72 ms per loop

%timeit df.set_index('arrivalDate').price.resample('W').sum().dropna()
1000 loops, best of 3: 1.86 ms per loop

setup

data = [
  {
    "arrivalDate": 1493611200000,
    "price": 4588
  },
  {
    "arrivalDate": 1493352000000,
    "price": 4630
  },
  {
    "arrivalDate": 1493179200000,
    "price": 4553
  },
  {
    "arrivalDate": 1493092800000,
    "price": 4530
  },
  {
    "arrivalDate": 1493006400000,
    "price": 4578
  },
  {
    "arrivalDate": 1490572800000,
    "price": 4457
  }
]

import pandas as pd

df = pd.DataFrame(data)

Upvotes: 4
