Reputation: 973
I have a data frame that contains several months of minute-by-minute data for a security. Example data:
  TICKER PER     DATE  TIME   OPEN   HIGH    LOW  CLOSE VOL OPENINT
1   RIM4   1 20/03/13 10:55 140000 140000 140000 140000   2       0
2   RIM4   1 20/03/13 12:19 140000 140000 140000 140000  32       0
3   RIM4   1 20/03/13 14:30 140000 140000 140000 140000   1       0
              DT            dateTime       date
1 20/03/13 10:55 2013-03-20 10:55:00 2013-03-20
2 20/03/13 12:19 2013-03-20 12:19:00 2013-03-20
3 20/03/13 14:30 2013-03-20 14:30:00 2013-03-20
I wanted to calculate the mean and standard deviation for every day, so I wrote this script:
# progress bar
pbb <- winProgressBar(title="Example progress bar", label="0% done", min=0, max=100, initial=0)
# pre-allocate one row per calendar day
mean <- rep(NA, rim4$date[length(rim4$date)] - rim4$date[1])
std.dev <- rep(NA, rim4$date[length(rim4$date)] - rim4$date[1])
daily <- data.frame(mean, std.dev)
names(daily) <- c("mean", "std.dev")
daily$date <- NA
daily$date <- rim4$date[1]
for (i in 2:length(daily$date)) {
  daily$date[i] <- daily$date[i-1] + 86400  # step forward one day at a time
}
# subsetting: take each day's rows and compute mean / sd of column 5 (OPEN)
for (i in 1:length(daily$date)) {
  subRim4 <- rim4[format(rim4$date, "%Y-%m-%d") == daily$date[i], 5]
  daily$mean[i] <- mean(subRim4, na.rm = TRUE)
  daily$std.dev[i] <- sd(subRim4, na.rm = TRUE)
  setWinProgressBar(pbb, i / length(daily$date) * 100)
}
close(pbb)
It works, but it's too slow: it takes my machine approximately 5 minutes to process one year of data, which is a bit irritating. Does R have a faster way to do this? And, by the way, how do I update the % value in the progress bar? It always shows "0%" in my case. Thanks to everybody in advance.
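On the progress-bar part of the question: winProgressBar() only redraws the label text when setWinProgressBar() is also given a label argument, not just a new value. A minimal sketch of the update line inside the loop (the label wording is illustrative):
pct <- i / length(daily$date) * 100
setWinProgressBar(pbb, pct, label = sprintf("%.0f%% done", pct))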
Upvotes: 1
Views: 97
Reputation: 52647
data.table is pretty fast for this type of operation. Try (not tested, since your data isn't really reproducible for the problem):
library(data.table)
data.table(rim4)[, list(mean(OPEN), sd(OPEN)), by=DATE]
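If you want named columns in the result rather than V1/V2, you can name the list elements; a small variation on the call above (not from the original answer, and untested against the question's data):
library(data.table)
data.table(rim4)[, list(mean = mean(OPEN), sd = sd(OPEN)), by = DATE]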
Alternatively, you can use dplyr:
library(dplyr)
rim4 %.% group_by(DATE) %.% summarise(mean(OPEN), sd(OPEN))
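Note that %.% was dplyr's original chaining operator and has since been replaced by the %>% pipe; with a current version of dplyr, the equivalent call would look like the following sketch (column names added for clarity):
library(dplyr)
rim4 %>% group_by(DATE) %>% summarise(mean = mean(OPEN), sd = sd(OPEN))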
Hopefully this will be fast enough that you don't need a progress bar. Also, out of curiosity after Joshua posted about xts, here are some benchmarks:
library(microbenchmark)
microbenchmark(
  rim4.dt[, list(mean(OPEN), sd(OPEN)), by=DATE],
  rim4 %.% group_by(DATE) %.% summarise(mean(OPEN), sd(OPEN)),
  apply.daily(x, function(d) c(mean=mean(d), sd=sd(d))),
  times=5
)
# Unit: milliseconds
#                                                         expr       min        lq    median
#             rim4.dt[, list(mean(OPEN), sd(OPEN)), by = DATE]  45.15606  48.94687  53.58172
# rim4 %.% group_by(DATE) %.% summarise(mean(OPEN), sd(OPEN))  50.42578  50.60166  50.78464
#    apply.daily(x, function(d) c(mean = mean(d), sd = sd(d))) 589.46499 592.49094 596.35476
I'm not sure if this is representative. Here is the data:
library(xts)
library(data.table)
library(dplyr)
set.seed(21)
size <- 1e6
val <- rnorm(size)
times <- seq(as.POSIXct("2014-03-28"), by="-1 min", length.out=size)
x <- xts(val, times)
rim4 <- data.frame(DATETIME=times, OPEN=val)
rim4$DATE <- format(rim4$DATETIME, "%Y-%m-%d")
rim4.dt <- data.table(rim4)
Upvotes: 1
Reputation: 176678
Use an actual time-series class (like xts) and many things become a lot easier:
library(xts)
set.seed(21)
x <- xts(rnorm(1e6), seq(Sys.time(), by="-1 min", length.out=1e6))
y <- apply.daily(x, function(d) c(mean=mean(d), sd=sd(d)))
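If you start from a data frame like the one in the question rather than from simulated data, you first need to build the xts object from the timestamp column. A minimal sketch, assuming the question's dateTime column is already POSIXct and OPEN holds the prices:
library(xts)
# an xts series of OPEN prices indexed by the minute timestamps
x <- xts(rim4$OPEN, order.by = rim4$dateTime)
# daily mean and standard deviation
y <- apply.daily(x, function(d) c(mean = mean(d), sd = sd(d)))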
Upvotes: 1