Reputation: 14604
I wish to count the number of times each combination of two elements appears in the same group.
For example, with:
> library(data.table)
> dat = data.table(group = c(1,1,1,2,2,2,3,3), id = c(10,11,12,10,11,13,11,13))
> dat
   group id
1:     1 10
2:     1 11
3:     1 12
4:     2 10
5:     2 11
6:     2 13
7:     3 11
8:     3 13
The expected result would be:
id.1 id.2 nb_common_appearances
  10   11 2 (in groups 1 and 2)
  10   12 1 (in group 1)
  11   12 1 (in group 1)
  10   13 1 (in group 2)
  11   13 2 (in groups 2 and 3)
Upvotes: 9
Views: 851
Reputation: 66819
Here is a data.table approach (roughly the same as @josilber's plyr answer):
pairs <- dat[, c(id = split(combn(id, 2), 1:2)), by = group]
pairs[, .N, by = .(id.1, id.2)]
#    id.1 id.2 N
# 1:   10   11 2
# 2:   10   12 1
# 3:   11   12 1
# 4:   10   13 1
# 5:   11   13 2
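The split(combn(id, 2), 1:2) step is the only tricky part: combn enumerates the pairs as columns of a matrix, and splitting its elements (which are read column-major) by the recycled factor 1:2 separates the first and second member of each pair. A small sketch of what it produces for group 1:
m <- combn(c(10, 11, 12), 2)   # pairs as columns
m
#      [,1] [,2] [,3]
# [1,]   10   10   11
# [2,]   11   12   12
split(m, 1:2)                  # row 1 becomes id.1, row 2 becomes id.2
# $`1`
# [1] 10 10 11
#
# $`2`
# [1] 11 12 12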
You might also consider viewing the results in a table:
pairs[, table(id.1, id.2)]
#     id.2
# id.1 11 12 13
#   10  2  1  1
#   11  0  1  2
You can use merges instead of combn:
setkey(dat, group)
dat[dat, allow.cartesian = TRUE][id < i.id, .N, by = .(id, i.id)]
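To see why the id < i.id filter is needed: the keyed self-join produces every ordered pair within a group, including self-pairs, and the filter keeps each unordered pair exactly once. For group 1 the intermediate looks like this:
dat[dat, allow.cartesian = TRUE][group == 1]
#    group id i.id
# 1:     1 10   10
# 2:     1 11   10
# 3:     1 12   10
# 4:     1 10   11
# 5:     1 11   11
# 6:     1 12   11
# 7:     1 10   12
# 8:     1 11   12
# 9:     1 12   12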
Benchmarks. For large data, the merges can be a little faster (as hypothesized by @DavidArenburg). @Arun's answer is faster still:
DT <- data.table(g = 1, id = 1:(1.5e3), key = "id")

system.time({ a <- combn(DT$id, 2) })
#    user  system elapsed
#    0.81    0.00    0.81
system.time({ b <- DT[DT, allow.cartesian = TRUE][id < i.id] })
#    user  system elapsed
#    0.13    0.00    0.12
# uses the indices() helper defined in @Arun's answer below
system.time({ d <- DT[, .(rep(id, (.N - 1L):0L), id[indices(.N - 1L)])] })
#    user  system elapsed
#    0.01    0.00    0.02
(I left out the group-by operation as I don't think it will be important to the timings.)
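For a more robust comparison, the three approaches can be timed together; a minimal sketch, assuming the microbenchmark package is installed and @Arun's indices() helper is defined:
library(microbenchmark)
indices <- function(n) sequence(n:1L) + rep(1:n, n:1)  # @Arun's helper, defined below
microbenchmark(
  combn = combn(DT$id, 2),
  merge = DT[DT, allow.cartesian = TRUE][id < i.id],
  arun  = DT[, .(rep(id, (.N - 1L):0L), id[indices(.N - 1L)])],
  times = 5L
)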
In defense of combn. The combn approach extends nicely to larger combinations, while the merge and @Arun's approaches, though much faster for pairs, do not (as far as I can see):
DT2 <- data.table(g = rep(1:2, each = 5), id = 1:5)
tuple_size <- 4
tuples <- DT2[, c(id = split(combn(id, tuple_size), 1:tuple_size)), by = g]
tuples[, .N, by = setdiff(names(tuples), "g")]
#    id.1 id.2 id.3 id.4 N
# 1:    1    2    3    4 2
# 2:    1    2    3    5 2
# 3:    1    2    4    5 2
# 4:    1    3    4    5 2
# 5:    2    3    4    5 2
Upvotes: 10
Reputation: 24945
Here is a dplyr approach, using combn to make the combinations.
library(dplyr)

dat %>%
  group_by(group) %>%
  do(as.data.frame(t(combn(.[["id"]], 2)))) %>%
  group_by(V1, V2) %>%
  summarise(n())
Source: local data frame [5 x 3]
Groups: V1

  V1 V2 n()
1 10 11   2
2 10 12   1
3 10 13   1
4 11 12   1
5 11 13   2
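To see what the do() step builds for each group, here is group 1 (ids 10, 11, 12) in isolation:
as.data.frame(t(combn(c(10, 11, 12), 2)))
#   V1 V2
# 1 10 11
# 2 10 12
# 3 11 12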
Upvotes: 2
Reputation: 118779
Another way using data.table:
require(data.table)
# indices(n) returns, for each of the n "first" ids, the positions of its later partners
indices <- function(n) sequence(n:1L) + rep(1:n, n:1)
dat[, .(id1 = rep(id, (.N-1L):0L),
        id2 = id[indices(.N-1L)]),
    by = group
][, .N, by = .(id1, id2)]
#    id1 id2 N
# 1:  10  11 2
# 2:  10  12 1
# 3:  11  12 1
# 4:  10  13 1
# 5:  11  13 2
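To trace the index arithmetic on a group of three ids: rep(id, 2:0) repeats each id once per later partner, and indices(2L) enumerates the partners' positions:
indices(2L)
# [1] 2 3 3
# so the first id is paired with positions 2 and 3, and the second id
# with position 3, giving the pairs (1,2), (1,3), (2,3)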
Upvotes: 7
Reputation: 44299
You could reshape your data to have each pair in each group in a separate row (I've used split-apply-combine for that step) and then use count from the plyr package to count the frequency of unique rows:
library(plyr)
count(do.call(rbind, lapply(split(dat, dat$group), function(x) t(combn(x$id, 2)))))
#   x.1 x.2 freq
# 1  10  11    2
# 2  10  12    1
# 3  10  13    1
# 4  11  12    1
# 5  11  13    2
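The intermediate matrix passed to count() has one row per within-group pair:
do.call(rbind, lapply(split(dat, dat$group), function(x) t(combn(x$id, 2))))
# 7 rows of (id, id) pairs: (10,11), (10,12), (11,12) from group 1;
# (10,11), (10,13), (11,13) from group 2; and (11,13) from group 3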
Upvotes: 6