Andrej

Reputation: 3839

Speed up the lookup procedure

I have two tables: coc_data and DT. The coc_data table contains co-occurrence frequencies between pairs of words. Its structure is similar to:

   word1 word2 freq
1      A     B    1
2      A     C    2
3      A     D    3
4      A     E    2

The second table, DT, contains frequencies for each word in different years, e.g.:

   word year weight
1     A 1966      9
2     A 1967      3
3     A 1968      1
4     A 1969      4
5     A 1970     10
6     B 1966      9

In reality, coc_data currently has 150,000 rows and DT has about 450,000 rows. Below is R code that simulates both datasets.

# Prerequisites
library(data.table)
set.seed(123)
n <- 5

# Simulate co-occurrence data [coc_data]
words <- LETTERS[1:n]
# Times each word used
freq <- sample(10, n, replace = TRUE)
# Co-occurrence data.frame
coc_data <- setNames(data.frame(t(combn(words,2))),c("word1", "word2"))
coc_data$freq <- apply(combn(freq, 2), 2, function(x) sample(1:min(x), 1))

# Simulate frequency table [DT]
years <- (1965 + 1):(1965 + 5)
word <- sort(rep(LETTERS[1:n], 5))
year <- rep(years, 5)
weight <- sample(10, 25, replace = TRUE)
freq_data <- data.frame(word = word, year = year, weight = weight)
# Combine to data.table for speed
DT <- data.table(freq_data, key = c("word", "year"))

My task is to normalize the frequencies in the coc_data table according to the frequencies in the DT table using the following function:

my_fun <- function(x, freq_data, years) {
  word1 <- x[1]
  word2 <- x[2]
  freq12 <- as.numeric(x[3])
  # Total weight of each word over the selected years
  freq1 <- sum(freq_data[word == word1 & year %in% years]$weight)
  freq2 <- sum(freq_data[word == word2 & year %in% years]$weight)
  ei <- (freq12^2) / (freq1 * freq2)
  return(ei)
}

Then I use the apply() function to apply my_fun to each row of the coc_data table:

apply(X = coc_data, MARGIN = 1, FUN = my_fun, freq_data = DT, years = years)

Because the DT lookup table is quite large, the whole mapping process takes very long. I wonder how I could improve my code to speed up the computation.

Upvotes: 2

Views: 76

Answers (1)

Assaf Wool

Reputation: 64

Since the years parameter is effectively constant in my_fun when used with apply, you can compute the total frequency of every word up front:

f <- aggregate(weight ~ word, data = DT, FUN = sum)

Now transform this into a hash, e.g.:

hs <- f$weight
names(hs) <- f$word

Now, inside my_fun, use the precomputed frequencies by looking up hs[word1] and hs[word2] instead of subsetting DT on every call. This should be much faster.
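For illustration, a minimal sketch of what that modified function could look like, assuming the hs vector built above (the name my_fun2 is just a placeholder, not from the original post):

my_fun2 <- function(x, hs) {
  # apply() passes each row as a character vector: word1, word2, freq
  freq12 <- as.numeric(x[3])
  # named-vector lookups replace the per-row subsetting of DT
  (freq12^2) / (hs[x[1]] * hs[x[2]])
}

# Usage:
apply(coc_data, MARGIN = 1, FUN = my_fun2, hs = hs)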

Even better, the whole result you're looking for can be computed in a single vectorized expression:

(coc_data$freq)^2 / (hs[coc_data$word1] * hs[coc_data$word2])

The data.table implementation of this would be:

# Total weight per word (the sum lands in the default column V1)
f <- DT[, sum(weight), word]
vec <- setNames(f$V1, f$word)

# Vectorized named-vector lookup for both words of each pair
setDT(coc_data)[, freq_new := freq^2 / (vec[word1] * vec[word2])]

which gives the following result:

> coc_data
    word1 word2 freq     freq_new
 1:     A     B    1 0.0014792899
 2:     A     C    1 0.0016025641
 3:     A     D    1 0.0010683761
 4:     A     E    1 0.0013262599
 5:     B     C    5 0.0434027778
 6:     B     D    1 0.0011574074
 7:     B     E    1 0.0014367816
 8:     C     D    4 0.0123456790
 9:     C     E    1 0.0009578544
10:     D     E    2 0.0047562426
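As a quick sanity check (not part of the original answer), you could time both approaches on the simulated data, e.g. with the microbenchmark package; the labels row_wise and vectorized are arbitrary, and my_fun, DT, years, coc_data and vec are assumed to be in the workspace from the code above:

library(microbenchmark)

microbenchmark(
  row_wise   = apply(coc_data[, c("word1", "word2", "freq")], 1, my_fun,
                     freq_data = DT, years = years),
  vectorized = coc_data$freq^2 / (vec[coc_data$word1] * vec[coc_data$word2]),
  times = 10
)

On the real 150,000-row coc_data the difference should be much more pronounced than on this toy example, since the vectorized version avoids 150,000 separate subsets of DT.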

Upvotes: 2
