nofunsally

Reputation: 2091

Using timestamps for range selection

Hello, I have a dataframe that is organized like the example below. I have a timestamp, a grouping variable, and several variables with numeric values per timestamp.

# dput of subset of data
structure(list(TIMESTAMP = structure(1:15, .Label = c("1/1/2012 11:00", 
"1/1/2012 12:00", "1/1/2012 13:00", "1/1/2012 14:00", "1/1/2012 15:00", 
"1/2/2012 11:00", "1/2/2012 12:00", "1/2/2012 13:00", "1/2/2012 14:00", 
"1/2/2012 15:00", "4/7/2012 11:00", "4/7/2012 12:00", "4/7/2012 13:00", 
"4/7/2012 14:00", "4/7/2012 15:00"), class = "factor"), P = c(992.4, 
992.4, 992.4, 992.4, 992.4, 992.4, 992.4, 992.4, 992.4, 992.4, 
239, 239, 239, 239, 239), WS = c(4.023, 3.576, 4.023, 6.259, 
4.47, 3.576, 3.576, 2.682, 4.023, 3.576, 2.682, 3.129, 2.682, 
2.235, 2.682), WD = c(212L, 200L, 215L, 213L, 204L, 304L, 276L, 
273L, 307L, 270L, 54L, 24L, 304L, 320L, 321L), AT = c(16.11, 
18.89, 20, 20, 19.44, 10.56, 11.11, 11.67, 12.22, 11.11, 17.22, 
18.33, 19.44, 20.56, 21.11), FT = c(17.22, 22.22, 22.78, 22.78, 
20, 11.11, 15.56, 17.22, 17.78, 15.56, 24.44, 25.56, 29.44, 30.56, 
29.44), H = c(50L, 38L, 38L, 39L, 48L, 24L, 19L, 18L, 16L, 18L, 
23L, 20L, 18L, 17L, 15L), B = c(1029L, 1027L, 1026L, 1024L, 1023L, 
1026L, 1025L, 1024L, 1023L, 1023L, 1034L, 1033L, 1032L, 1031L, 
1030L), FM = c(14.9, 14.4, 14, 13.7, 13.6, 13.1, 12.8, 12.3, 
12, 11.7, 12.8, 12, 11.4, 10.9, 10.4), GD = c(204L, 220L, 227L, 
222L, 216L, 338L, 311L, 326L, 310L, 273L, 62L, 13L, 312L, 272L, 
281L), MG = c(8.047, 9.835, 10.28, 13.41, 11.18, 9.388, 8.941, 
8.494, 9.835, 10.73, 6.706, 7.153, 8.047, 8.047, 7.6), SR = c(522L, 
603L, 604L, 526L, 248L, 569L, 653L, 671L, 616L, 487L, 972L, 1053L, 
1061L, 1002L, 865L), WS2 = c(2.235, 3.576, 4.47, 4.47, 5.364, 
4.023, 2.682, 3.576, 3.576, 4.023, 3.129, 3.129, 3.576, 2.682, 
3.129), WD2 = c(200L, 201L, 206L, 210L, 211L, 319L, 315L, 311L, 
302L, 290L, 49L, 39L, 15L, 348L, 329L)), .Names = c("TIMESTAMP", 
"P", "WS", "WD", "AT", "FT", "H", "B", "FM", "GD", "MG", "SR", 
"WS2", "WD2"), class = "data.frame", row.names = c(NA, -15L))

I am trying to figure out the best way to deal with timestamps for future manipulations. I have read about lubridate (e.g. here), zoo, and POSIXt. However, I feel there might be some R/timestamp trickery that I am unaware of that would make working with timestamps easier (i.e. I might not fully understand timestamps).

Ultimately, I want to create a new dataframe that consists of the average of each of these variables over some range of dates or times. For example, the average value of each variable between 12:00 and 16:00 daily.

Is one of these three packages better than the others for performing this sort of task? Could you point me to an example or solution that does the averaging I described above? Or are these packages more suited to figuring out durations (e.g. the number of hours or days between events [e.g. arrivals and departures]), or can they also be used for other dataframe tasks (like averaging)?

Upvotes: 0

Views: 1656

Answers (2)

CHP

Reputation: 17189

I am updating the answer with the sample data provided. The old answer is kept intact at the end of this post.

First, you need to convert your data frame to an xts object.

> library(xts)
> data.xts <- as.xts(df[,2:14], as.POSIXct(strptime(df[,1], '%m/%d/%Y %H:%M')))
> data.xts
                        P    WS  WD    AT    FT  H    B   FM  GD     MG   SR   WS2 WD2
2012-01-01 11:00:00 992.4 4.023 212 16.11 17.22 50 1029 14.9 204  8.047  522 2.235 200
2012-01-01 12:00:00 992.4 3.576 200 18.89 22.22 38 1027 14.4 220  9.835  603 3.576 201
2012-01-01 13:00:00 992.4 4.023 215 20.00 22.78 38 1026 14.0 227 10.280  604 4.470 206
2012-01-01 14:00:00 992.4 6.259 213 20.00 22.78 39 1024 13.7 222 13.410  526 4.470 210
2012-01-01 15:00:00 992.4 4.470 204 19.44 20.00 48 1023 13.6 216 11.180  248 5.364 211
2012-01-02 11:00:00 992.4 3.576 304 10.56 11.11 24 1026 13.1 338  9.388  569 4.023 319
2012-01-02 12:00:00 992.4 3.576 276 11.11 15.56 19 1025 12.8 311  8.941  653 2.682 315
2012-01-02 13:00:00 992.4 2.682 273 11.67 17.22 18 1024 12.3 326  8.494  671 3.576 311
2012-01-02 14:00:00 992.4 4.023 307 12.22 17.78 16 1023 12.0 310  9.835  616 3.576 302
2012-01-02 15:00:00 992.4 3.576 270 11.11 15.56 18 1023 11.7 273 10.730  487 4.023 290
2012-04-07 11:00:00 239.0 2.682  54 17.22 24.44 23 1034 12.8  62  6.706  972 3.129  49
2012-04-07 12:00:00 239.0 3.129  24 18.33 25.56 20 1033 12.0  13  7.153 1053 3.129  39
2012-04-07 13:00:00 239.0 2.682 304 19.44 29.44 18 1032 11.4 312  8.047 1061 3.576  15
2012-04-07 14:00:00 239.0 2.235 320 20.56 30.56 17 1031 10.9 272  8.047 1002 2.682 348
2012-04-07 15:00:00 239.0 2.682 321 21.11 29.44 15 1030 10.4 281  7.600  865 3.129 329
> data.xts['T12:00:00/T16:00:00']
                        P    WS  WD    AT    FT  H    B   FM  GD     MG   SR   WS2 WD2
2012-01-01 12:00:00 992.4 3.576 200 18.89 22.22 38 1027 14.4 220  9.835  603 3.576 201
2012-01-01 13:00:00 992.4 4.023 215 20.00 22.78 38 1026 14.0 227 10.280  604 4.470 206
2012-01-01 14:00:00 992.4 6.259 213 20.00 22.78 39 1024 13.7 222 13.410  526 4.470 210
2012-01-01 15:00:00 992.4 4.470 204 19.44 20.00 48 1023 13.6 216 11.180  248 5.364 211
2012-01-02 12:00:00 992.4 3.576 276 11.11 15.56 19 1025 12.8 311  8.941  653 2.682 315
2012-01-02 13:00:00 992.4 2.682 273 11.67 17.22 18 1024 12.3 326  8.494  671 3.576 311
2012-01-02 14:00:00 992.4 4.023 307 12.22 17.78 16 1023 12.0 310  9.835  616 3.576 302
2012-01-02 15:00:00 992.4 3.576 270 11.11 15.56 18 1023 11.7 273 10.730  487 4.023 290
2012-04-07 12:00:00 239.0 3.129  24 18.33 25.56 20 1033 12.0  13  7.153 1053 3.129  39
2012-04-07 13:00:00 239.0 2.682 304 19.44 29.44 18 1032 11.4 312  8.047 1061 3.576  15
2012-04-07 14:00:00 239.0 2.235 320 20.56 30.56 17 1031 10.9 272  8.047 1002 2.682 348
2012-04-07 15:00:00 239.0 2.682 321 21.11 29.44 15 1030 10.4 281  7.600  865 3.129 329

Now you can use period.apply as shown in the old answer below.
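For completeness, a minimal sketch of that step on the sample data (assuming the data.xts object built above; I use colMeans rather than mean so that each column is unambiguously averaged separately on current xts versions):

> filtered <- data.xts['T12:00:00/T16:00:00']        # keep 12:00-16:00 each day
> ep <- endpoints(filtered, on = 'days')             # last row index of each day
> daily.means <- period.apply(filtered, INDEX = ep, FUN = colMeans)

daily.means then holds one row per day with the per-column averages, indexed by each day's last timestamp.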

Old answer

You can consider the "xts" package for this purpose.

I will give you a brief example along the lines of what you asked for.

Suppose you have a time series in xts format, as below:

> head(EURUSD);tail(EURUSD)
                       Open    High     Low   Close
2009-05-01 00:10:00 1.32436 1.32600 1.32436 1.32587
2009-05-01 00:20:00 1.32589 1.32597 1.32430 1.32431
2009-05-01 00:30:00 1.32441 1.32543 1.32432 1.32479
2009-05-01 00:40:00 1.32484 1.32554 1.32482 1.32543
2009-05-01 00:50:00 1.32551 1.32610 1.32532 1.32538
2009-05-01 01:00:00 1.32538 1.32618 1.32462 1.32462
                       Open    High     Low   Close
2009-05-31 23:10:00 1.41175 1.41281 1.41129 1.41262
2009-05-31 23:20:00 1.41258 1.41259 1.41205 1.41215
2009-05-31 23:30:00 1.41206 1.41210 1.41128 1.41132
2009-05-31 23:40:00 1.41132 1.41147 1.41062 1.41093
2009-05-31 23:50:00 1.41102 1.41102 1.41032 1.41077
2009-06-01 00:00:00 1.41077 1.41099 1.41002 1.41052

You can then filter the data by time index as follows

> EURUSDfiltered <- EURUSD['T12:00:00/T16:00:00']

> tail(EURUSDfiltered,60)
                       Open    High     Low   Close
2009-05-27 14:30:00 1.39063 1.39121 1.38873 1.39094
2009-05-27 14:40:00 1.39098 1.39120 1.38863 1.39075
2009-05-27 14:50:00 1.39079 1.39107 1.38935 1.39020
2009-05-27 15:00:00 1.39016 1.39343 1.38986 1.39286
2009-05-27 15:10:00 1.39286 1.39293 1.38711 1.38898
2009-05-27 15:20:00 1.38898 1.38961 1.38744 1.38824
2009-05-27 15:30:00 1.38824 1.39157 1.38814 1.39148
2009-05-27 15:40:00 1.39145 1.39281 1.39064 1.39248
2009-05-27 15:50:00 1.39245 1.39276 1.39123 1.39143
2009-05-27 16:00:00 1.39145 1.39251 1.39140 1.39231
2009-05-28 12:00:00 1.38708 1.38715 1.38524 1.38565
2009-05-28 12:10:00 1.38563 1.38633 1.38540 1.38594
2009-05-28 12:20:00 1.38596 1.38750 1.38528 1.38691
2009-05-28 12:30:00 1.38691 1.38754 1.38646 1.38710
2009-05-28 12:40:00 1.38721 1.38976 1.38668 1.38910
2009-05-28 12:50:00 1.38913 1.38962 1.38761 1.38775
2009-05-28 13:00:00 1.38777 1.38811 1.38629 1.38680
....
2009-05-28 15:30:00 1.39660 1.39691 1.39584 1.39643
2009-05-28 15:40:00 1.39646 1.39802 1.39616 1.39643
2009-05-28 15:50:00 1.39643 1.39704 1.39574 1.39668
2009-05-28 16:00:00 1.39666 1.39684 1.39423 1.39467
2009-05-29 12:00:00 1.41076 1.41076 1.40890 1.40967
2009-05-29 12:10:00 1.40965 1.41010 1.40870 1.40874
2009-05-29 12:20:00 1.40874 1.41062 1.40870 1.41010
2009-05-29 12:30:00 1.41008 1.41013 1.40844 1.40940
2009-05-29 12:40:00 1.40933 1.41140 1.40886 1.40985
2009-05-29 12:50:00 1.40985 1.41075 1.40887 1.41073
....

Once you have the filtered data, you can calculate some aggregate function using period.apply with the help of endpoints:

> ep <- endpoints(EURUSDfiltered, on='days')
> aggValues <- period.apply(EURUSDfiltered, INDEX=ep, FUN=mean)
> aggValues
                        Open     High      Low    Close
2009-05-01 16:00:00 1.326569 1.327338 1.325839 1.326445
2009-05-04 16:00:00 1.329267 1.330415 1.328654 1.329759
2009-05-05 16:00:00 1.338648 1.339428 1.337636 1.338623
2009-05-06 16:00:00 1.331870 1.332957 1.330978 1.331909
2009-05-07 16:00:00 1.339542 1.341126 1.337957 1.339760
2009-05-08 16:00:00 1.347692 1.348982 1.346786 1.347995
2009-05-11 16:00:00 1.359852 1.360683 1.359177 1.359987
2009-05-12 16:00:00 1.365657 1.366473 1.364534 1.365473
2009-05-13 16:00:00 1.360978 1.361865 1.359939 1.360888
2009-05-14 16:00:00 1.358187 1.359207 1.357512 1.358386
2009-05-15 16:00:00 1.356786 1.357672 1.355668 1.356690
2009-05-18 16:00:00 1.349660 1.350412 1.349085 1.349679
2009-05-19 16:00:00 1.360091 1.360750 1.359121 1.360065
2009-05-20 16:00:00 1.373703 1.374888 1.373062 1.373990
2009-05-22 16:00:00 1.399224 1.400354 1.398262 1.399429
2009-05-25 16:00:00 1.399991 1.400309 1.399607 1.399976
2009-05-26 16:00:00 1.393970 1.395064 1.393425 1.394333
2009-05-27 16:00:00 1.392505 1.393589 1.391215 1.392552
2009-05-28 16:00:00 1.391658 1.392870 1.390735 1.391952
2009-05-29 16:00:00 1.411398 1.412516 1.410404 1.411468

UPDATE: In response to the comment below

Further study of ?.subset.xts reveals that "when a raw character vector is used for the i subset argument, it is processed as if it was ISO-8601 compliant", and http://en.wikipedia.org/wiki/ISO_8601 mentions the T prefix being used to designate time.
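For instance, the same ISO-8601 style strings also select whole days and date ranges (a small sketch against the data.xts object above):

> data.xts['2012-01-02']                          # everything on 2 Jan 2012
> data.xts['2012-01-01/2012-01-02']               # inclusive date range
> data.xts['2012-01-02T12:00/2012-01-02T14:00']   # date plus time-of-day window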

Upvotes: 4

Dinre

Reputation: 4216

I think the process that may help you most is to mutate your [TIMESTAMP] data into a grouping variable. Then, I recommend using one of the many data summary packages to create the report. My personal preference is to use the 'plyr' package for both tasks, and I use it in this example.

Step 1: Use the 'as.POSIXct' function to convert your timestamp data to POSIX datetimes for use with the various datetime functions. With no extra parameters, it simply converts the data without any adjustments.

data$TIMESTAMP <- as.POSIXct(data$TIMESTAMP)

Update: Since the time is not in the unambiguous, decreasing-order format (i.e. YYYY/MM/DD HH:MM:SS), the 'as.POSIXct' function won't be able to do a quick conversion of the data. Use 'as.POSIXct' on its own only when you are using the unambiguous format. For other arrangements, use the 'strptime' function, specifying the current format like so:

data$TIMESTAMP <- as.POSIXct(strptime(data$TIMESTAMP, "%m/%d/%Y %H:%M"))

This tells the 'strptime' function what format is currently in use; wrapping it in 'as.POSIXct' (as above) converts the POSIXlt value that 'strptime' returns into a POSIXct one, which is better behaved as a data frame column. There should not be any need to use the 'as.character' function unless your current data is not a string.

Step 2: Use the 'plyr' function 'ddply' (which takes a dataframe and returns a dataframe) to create a new variable for use in the groupings. Use the 'format' function to extract the part you want from the TIMESTAMP values; see the 'format' documentation for the available format codes. In this case, here is how you would create a [MONTH] variable:

library(plyr)
data <- ddply(data, .(TIMESTAMP), mutate, MONTH = format(TIMESTAMP, "%m"))

Step 3: Use the 'plyr' function 'ddply' to summarize the data by your new variable.

ddply(data, .(MONTH), summarize, V1_AVG = mean(V1), V2_AVG = mean(V2))

If you want to also summarize by a second variable (like [GROUP]), just include it in the grouping argument, like so:

ddply(data, .(MONTH, GROUP), summarize, V1_AVG = mean(V1), V2_AVG = mean(V2))

Technically, you could do this all in one statement, but experience has taught me caution. I recommend doing each step by itself to make sure nothing gets messed up.

You can parse your data however you like by fiddling around like this just as long as your timestamps have been converted to POSIX datetimes. The 'plyr' package is extremely flexible for stuff like this.

Update: As per the OP's request, I am including how you would do the same calculation using only data between the hours of 12pm and 4pm. You don't actually have to use any particular package to subset your data like this, since it's a straight data filter. Just change the data set passed into the 'ddply' function like so:

# Use one of the following lines, which both do the same thing.
# I'm just including both as different examples of logic that can be used.
data_Subset <- data[as.numeric(format(data$TIMESTAMP, "%H")) >= 12 & as.numeric(format(data$TIMESTAMP, "%H")) < 16,]
data_Subset <- data[as.numeric(format(data$TIMESTAMP, "%H")) %in% 12:15,]

# Then summarize using the new data frame as an input
ddply(data_Subset, .(MONTH, GROUP), summarize, V1_AVG = mean(V1), V2_AVG = mean(V2))

Here, we are filtering the data frame to keep only the rows (with all columns) where the hour (%H) is 12 through 15, which effectively includes all times from 12:00 to 15:59. If you start getting into very large data sets, you may have to look at other solutions (like the 'data.table' package; see the sketch below), but otherwise this is your fastest option.
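As a rough sketch of that alternative (not something the steps above depend on), the same filter-and-average might look like this in 'data.table', assuming TIMESTAMP is already POSIXct and reusing the placeholder column names from above:

library(data.table)
dt <- as.data.table(data)
# hour() is data.table's hour extractor for date-time columns;
# keep 12:00-15:59, then average the placeholder columns by MONTH
dt[hour(TIMESTAMP) %in% 12:15, .(V1_AVG = mean(V1), V2_AVG = mean(V2)), by = MONTH]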

Again, these filters only work because we have transformed our datetimes into POSIX-compatible datetimes.
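Putting the steps together on the question's actual data, here is one possible end-to-end sketch (averaging, say, WS and AT per day over the 12:00-16:00 window; df is the dput'd data frame from the question, and the DATE helper column is my own addition):

library(plyr)

# Step 1: convert the factor timestamps to POSIXct
df$TIMESTAMP <- as.POSIXct(strptime(df$TIMESTAMP, "%m/%d/%Y %H:%M"))

# Step 2: build a grouping variable (one value per calendar day)
df$DATE <- format(df$TIMESTAMP, "%Y-%m-%d")

# Step 3: filter to 12:00-15:59 and average per day
df_Subset <- df[as.numeric(format(df$TIMESTAMP, "%H")) %in% 12:15, ]
ddply(df_Subset, .(DATE), summarize, WS_AVG = mean(WS), AT_AVG = mean(AT))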

Upvotes: 1
