Chris L

Reputation: 338

How do I append to a Document Term Matrix in R?

I would like to append two Document Term Matrices together. I have one row of data and would like to use different control functions on its parts: an n-gram tokenizer, stopword removal, and wordLengths bounds for the text fields, and none of these for my non-text fields.

When I use tm_combine, c(dtm_text, dtm_inputs), it adds the second set as a new row. I want to append these terms to the same row, i.e. as additional columns.

library("tm")

BigramTokenizer <- function(x)
  unlist(lapply(ngrams(words(x), 2), paste, collapse = " "),
         use.names = FALSE)

# Data to be tokenized
txt_fields   <- paste("i like your store", "i love your products", "i am happy")
# Data not to be tokenized
other_inputs <- paste("cd1_ABC", "cd2_555", "cd3_7654")

# N-gram tokenize the text data
dtm_text <- DocumentTermMatrix(Corpus(VectorSource(txt_fields)),
                               control = list(tokenize    = BigramTokenizer,
                                              stopwords   = TRUE,
                                              wordLengths = c(2, Inf),
                                              bounds      = list(global = c(1, Inf))))

# Do not perform tokenization of the other inputs
dtm_inputs <- DocumentTermMatrix(Corpus(VectorSource(other_inputs)),
                                 control = list(bounds = list(global = c(1, Inf))))

# DESIRED OUTPUT
    # DESIRED OUTPUT
<<DocumentTermMatrix (documents: 1, terms: 12)>>
Non-/sparse entries: 12/0
Sparsity           : 0%
Maximal term length: 13
Weighting          : term frequency (tf)

    Terms
Docs am happy happy like like your love love your products products am store store love
   1        1     1    1         1    1         1        1           1     1          1
    Terms
Docs your products your store cd1_abc cd2_555 cd3_7654
   1             1          1       1       1        1

Upvotes: 2

Views: 1504

Answers (2)

Dmitriy Selivanov

Reputation: 4595

I suggest using text2vec (but I'm biased, since I'm the author).

library(text2vec)
# Data to be tokenized
txt_fields   <- paste("i like your store","i love your products","i am happy")
# Data not to be tokenized
other_inputs <- paste("cd1_ABC","cd2_555","cd3_7654")
stopwords = tm::stopwords("en")

# tokenize by whitespace
txt_tokens = strsplit(txt_fields, ' ', fixed = TRUE)
vocab = create_vocabulary(itoken(txt_tokens), ngram = c(1, 2), stopwords = stopwords)
# if you need word length bounds:
# vocab$vocab = vocab$vocab[nchar(terms) > 1]
# but note, it will not remove "i_am", etc.
# you can add the word "i" to stopwords to remove such terms
txt_vectorizer = vocab_vectorizer(vocab)
dtm_text = create_dtm(itoken(txt_tokens), vectorizer = txt_vectorizer)

# also tokenize by whitespace, but don't create bigrams in the next step
other_tokens = strsplit(other_inputs, ' ', fixed = TRUE)
vocab_other = create_vocabulary(itoken(other_tokens))
other_vectorizer = vocab_vectorizer(vocab_other)
dtm_other = create_dtm(itoken(other_tokens), vectorizer = other_vectorizer)
# combine
result = cbind(dtm_text, dtm_other)
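A toy check of the final step (hypothetical names; assumes only the Matrix package, which text2vec depends on): cbind on two one-row sparse matrices appends the columns while keeping a single document row, which is exactly the behavior c() on tm objects lacks.

```r
library(Matrix)

# Two single-document "DTMs" as 1-row sparse matrices
a <- Matrix(matrix(c(1, 1), nrow = 1,
                   dimnames = list("1", c("like_your", "your_store"))),
            sparse = TRUE)
b <- Matrix(matrix(c(1, 1), nrow = 1,
                   dimnames = list("1", c("cd1_ABC", "cd2_555"))),
            sparse = TRUE)

ab <- cbind(a, b)  # still one document row, now four term columns
```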

Upvotes: 2

Gregory Demin

Reputation: 4846

dtm_combined = as.DocumentTermMatrix(cbind(dtm_text, dtm_inputs), weighting = weightTf)
inspect(dtm_combined)
# <<DocumentTermMatrix (documents: 1, terms: 8)>>
#     Non-/sparse entries: 8/0
# Sparsity           : 0%
# Maximal term length: 8
# Weighting          : term frequency (tf)
# 
# Terms
# Docs happy like love products store cd1_abc cd2_555 cd3_7654
# 1     1    1    1        1     1       1       1        1

But it will give wrong results if the same terms appear in both dtm_text and dtm_inputs. These words won't be combined and will appear twice in dtm_combined.
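If duplicate terms are a concern, one workaround is to sum columns that share a name. This is a minimal base-R sketch on a toy matrix (not a tm API); for the real case the same idea applies to as.matrix(dtm_combined):

```r
# Toy combined matrix in which "store" appears twice
m <- matrix(c(1, 1, 2, 1), nrow = 1,
            dimnames = list("1", c("store", "like", "store", "cd1_abc")))

# Sum columns that share the same term name
merged <- t(rowsum(t(m), group = colnames(m)))
merged["1", "store"]  # the two "store" columns collapse to a single count of 3
```

The merged matrix can then be coerced back with as.DocumentTermMatrix(merged, weighting = weightTf) if a tm object is needed downstream.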

Upvotes: 0
