Reputation: 223
I often need to measure percentage changes between two distinct scenarios/tests/periods.
An example dataset:
library(dplyr)
set.seed(11)
toy_dat <- data.frame(state = sample(state.name, 3, replace = FALSE),
                      experiment = c('control', 'measure'),
                      accuracy = sample(30:50, size = 6, replace = TRUE),
                      speed = sample(21:39, size = 6, replace = TRUE)) %>%
  arrange(state)
state experiment accuracy speed
1 Alabama measure 31 24
2 Alabama control 36 37
3 Indiana control 30 23
4 Indiana measure 31 38
5 Missouri control 50 29
6 Missouri measure 48 34
I then resort to writing something horrible like this:
result <- toy_dat %>%
  group_by(state) %>%
  arrange(experiment) %>%
  summarise(acc_delta = (accuracy[2] - accuracy[1]) / accuracy[1],
            speed_delta = (speed[2] - speed[1]) / speed[1])
However, the above solution does not scale at all as the number of measured variables grows. In addition, the code is very fragile: it silently depends on the rows within each group being in a particular order.
I am very new to R. I was hoping that this is a common enough pattern that there are well-known (smarter) solutions to the problem.
I would greatly appreciate any help/pointers.
Upvotes: 1
Views: 223
Reputation: 92282
Just create your own custom function and use summarise_each in order to apply it to all the measurement columns at once (it doesn't matter how many measurements you have):
delta_fun <- function(x) diff(x)/x[1L]
toy_dat %>%
group_by(state) %>%
arrange(experiment) %>%
summarise_each(funs(delta_fun), -experiment)
# Source: local data frame [3 x 3]
#
# state accuracy speed
# 1 Alabama -0.13888889 -0.3513514
# 2 Indiana 0.03333333 0.6521739
# 3 Missouri -0.04000000 0.1724138
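(A note for readers on current dplyr: summarise_each() and funs() have since been deprecated, and dplyr >= 1.0 expresses this pattern with across(). Below is a sketch of the equivalent; the data frame is hard-coded with the values shown in the question so the numbers are reproducible regardless of your RNG defaults.)

```r
library(dplyr)

# The question's data, hard-coded for reproducibility
toy_dat <- data.frame(
  state      = rep(c("Alabama", "Indiana", "Missouri"), each = 2),
  experiment = c("measure", "control", "control", "measure",
                 "control", "measure"),
  accuracy   = c(31, 36, 30, 31, 50, 48),
  speed      = c(24, 37, 23, 38, 29, 34)
)

# For a two-row group, diff(x)/x[1L] is exactly (second - first)/first
delta_fun <- function(x) diff(x) / x[1L]

result <- toy_dat %>%
  arrange(state, experiment) %>%   # 'control' sorts before 'measure'
  group_by(state) %>%
  summarise(across(-experiment, delta_fun))
```

Sorting before grouping guarantees that within each state the control row comes first, so diff() always computes measure minus control.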
As you mentioned that you are new to R, here's another awesome package you can use in order to achieve the same effect
library(data.table)
setDT(toy_dat)[order(experiment),
lapply(.SD, delta_fun),
.SDcols = -"experiment",
by = state]
# state accuracy speed
# 1: Alabama -0.13888889 -0.3513514
# 2: Indiana 0.03333333 0.6521739
# 3: Missouri -0.04000000 0.1724138
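If you'd rather avoid extra packages entirely, base R's aggregate() can produce the same result, provided you sort first so that 'control' precedes 'measure' within each state. A sketch, again with the question's data hard-coded:

```r
delta_fun <- function(x) diff(x) / x[1L]

toy_dat <- data.frame(
  state      = rep(c("Alabama", "Indiana", "Missouri"), each = 2),
  experiment = c("measure", "control", "control", "measure",
                 "control", "measure"),
  accuracy   = c(31, 36, 30, 31, 50, 48),
  speed      = c(24, 37, 23, 38, 29, 34)
)

# Sort so that the control row precedes the measure row in every state
sorted <- toy_dat[order(toy_dat$state, toy_dat$experiment), ]

# Apply delta_fun to each measurement column, grouped by state
result <- aggregate(cbind(accuracy, speed) ~ state,
                    data = sorted, FUN = delta_fun)
```

This shares the ordering fragility you noted, which is why the sort is done explicitly right before aggregating.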
Upvotes: 1