JRF1111

Reputation: 182

Count number of times two values co-occur within a group in R

I've searched for answers to this question and found similar ones (Count number of rows within each group, count unique combinations of variable values in an R dataframe column, R count occurrences of an element by groups) but none of them address my particular issue.

I have a dataframe with variables year, ID, and code. Every person has an ID and can have multiple code values over the course of (potentially) multiple years.

df = data.frame(ID   = c(1,1,1,1, 2,2,2, 3, 4,4,4,4,4,4,4,4, 5,5,5),
                year = c(2018, 2018, 2020, 2020,
                         2020, 2020, 2020,
                         2011,
                         2019, 2019, 2019, 2019, 2020, 2020, 2020, 2020,
                         2018, 2019, 2020),
                code = c("A", "B", "C", "D",
                         "A", "B", "Q",
                         "G",
                         "A", "B", "Q", "G", "C", "D", "T", "S",
                         "S", "Z", "F"))

df
   ID year code
1   1 2018    A
2   1 2018    B
3   1 2020    C
4   1 2020    D
5   2 2020    A
6   2 2020    B
7   2 2020    Q
8   3 2011    G
9   4 2019    A
10  4 2019    B
11  4 2019    Q
12  4 2019    G
13  4 2020    C
14  4 2020    D
15  4 2020    T
16  4 2020    S
17  5 2018    S
18  5 2019    Z
19  5 2020    F

What I would like is another dataframe giving the number of times two different values of code co-occurred within groupings of ID and year (in this example, A and B co-occurred 3 times, and A and C co-occurred 0 times), which I will then use for a network analysis.
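To pin down the quantity being asked for, here is a minimal base R sketch (using only the example data above, not my real approach): build an incidence matrix of (ID, year) groups by code, and take its cross-product, whose off-diagonal entries are exactly these co-occurrence counts.

```r
# Example data from the question
df <- data.frame(
  ID   = c(1,1,1,1, 2,2,2, 3, 4,4,4,4,4,4,4,4, 5,5,5),
  year = c(2018,2018,2020,2020, 2020,2020,2020, 2011,
           2019,2019,2019,2019,2020,2020,2020,2020, 2018,2019,2020),
  code = c("A","B","C","D", "A","B","Q", "G",
           "A","B","Q","G","C","D","T","S", "S","Z","F"))

# Incidence matrix: one row per (ID, year) group, one column per code
m <- table(paste(df$ID, df$year), df$code)

# crossprod(m) = t(m) %*% m: entry [i, j] counts the groups
# in which codes i and j both occur
co <- crossprod(m)
co["A", "B"]  # 3
co["A", "C"]  # 0
```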

So far I have this syntax:

1: Make a wide version of the data

library(tidyverse)
wide = df %>% 
        group_by(year, ID) %>% 
        mutate(row = row_number()) %>% 
        ungroup() %>% 
        pivot_wider(
            id_cols = c(ID, year),
            names_from = row, 
            names_prefix = "code_", 
            values_from = code
        )

2: Make a node list

nodes = distinct(df, code) %>% rowid_to_column("id")

3: Make an edge list

#edge list needs to be three vars: source, dest, and weight
# source and dest are simply code names that (potentially) co-occur in the same year for an ID
# weight is the number of times the codes co-occurred in the same year for each ID.

#all combinations of two codes
edges = combn(x = nodes$code, m = 2) %>% 
    t() %>% 
    as.data.frame()

colnames(edges) = c("source", "dest")
edges$weight = NA_integer_


#oh, no! a for() loop! a coder's last ditch effort to make something work
for(i in 1:nrow(edges)){
    
    source = edges$source[i]
    dest = edges$dest[i]
    

    #get the cases with the first code of interest
    temp = df %>% 
        filter( code == source ) %>% 
        select(ID, year)
    
    #get the other codes that occurred for that ID in that year
    temp = left_join(temp, 
                     wide, 
                     by = c("ID", "year"))
    
    
    #convert to a logical showing if the other codes are the one I want
    temp = temp %>% 
        mutate_at(vars(starts_with("code_")),
                  function(x){ x == dest })
    
    #sum the number of times `source` and `dest` co-occurred
    temp$dest = temp %>% select(starts_with("code_")) %>% rowSums(., na.rm=TRUE)
    edges$weight[i] = sum(temp$dest, na.rm = TRUE)
    
}

Edit to add the result:

edges
   source dest weight
1       A    B      3
2       A    C      0
3       A    D      0
4       A    Q      2
5       A    G      1
6       A    T      0
7       A    S      0
8       A    Z      0
9       A    F      0
10      B    C      0
11      B    D      0
12      B    Q      2
13      B    G      1
14      B    T      0
15      B    S      0
16      B    Z      0
17      B    F      0
18      C    D      2
19      C    Q      0
20      C    G      0
21      C    T      1
22      C    S      1
23      C    Z      0
24      C    F      0
25      D    Q      0
26      D    G      0
27      D    T      1
28      D    S      1
29      D    Z      0
30      D    F      0
31      Q    G      1
32      Q    T      0
33      Q    S      0
34      Q    Z      0
35      Q    F      0
36      G    T      0
37      G    S      0
38      G    Z      0
39      G    F      0
40      T    S      1
41      T    Z      0
42      T    F      0
43      S    Z      0
44      S    F      0
45      Z    F      0

That gives me what I want (a dataframe showing A and B co-occurred 3 times, A and C co-occurred 0 times, A and D co-occurred 0 times, A and G co-occurred 1 time, A and Q co-occurred 2 times, etc.). This works, but it takes a second or two even for this small example, and my real data set has ~3,000,000 observations. I let the loop run for a while, then stopped it only to find it was about 1% complete.

Is there a better/faster way to do this?

Upvotes: 3

Views: 1036

Answers (2)

pseudospin

Reputation: 2767

Here's an alternative that just does a self-join instead, so it should be very fast on large data.

library(data.table)
setDT(df)
df[df, on = c('ID','year'), allow.cartesian = TRUE][
  code<i.code, .N, .(pair = paste0(code, i.code))]

#>     pair N
#>  1:   AB 3
#>  2:   CD 2
#>  3:   AQ 2
#>  4:   BQ 2
#>  5:   GQ 1
#>  6:   AG 1
#>  7:   BG 1
#>  8:   CT 1
#>  9:   DT 1
#> 10:   ST 1
#> 11:   CS 1
#> 12:   DS 1

Upvotes: 3

pseudospin

Reputation: 2767

This should work. You'll only get one entry for each pair because of the sort.

library(data.table)
setDT(df)
all_pairs <- function(x) {
  if (length(x) > 1) {
    sapply(combn(sort(x), 2, simplify = FALSE), paste, collapse = '')
  } else {
    c()
  }
}
df[,.(pairs = all_pairs(code)), .(ID, year)][,.N, .(pairs)]

#>     pairs N
#>  1:    AB 3
#>  2:    CD 2
#>  3:    AQ 2
#>  4:    BQ 2
#>  5:    AG 1
#>  6:    BG 1
#>  7:    GQ 1
#>  8:    CS 1
#>  9:    CT 1
#> 10:    DS 1
#> 11:    DT 1
#> 12:    ST 1
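To illustrate why the sort guarantees one entry per pair: for a single group's codes, `all_pairs()` emits each unordered pair exactly once, in alphabetical order.

```r
# all_pairs() as defined in the answer
all_pairs <- function(x) {
  if (length(x) > 1) {
    # sort first, so each unordered pair appears in one canonical order
    sapply(combn(sort(x), 2, simplify = FALSE), paste, collapse = '')
  } else {
    c()  # groups with a single code contribute no pairs
  }
}

all_pairs(c("B", "A", "Q"))  # "AB" "AQ" "BQ"
```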

Upvotes: 0
