data paRty

Reputation: 218

Numeric matrix is taking far more memory than it should - R

I am creating a document term matrix (dtm for short) for a Naive Bayes implementation (I know there is a function for this, but I have to code it myself for homework). I wrote a function that successfully creates the dtm; the problem is that the resulting matrix takes up too much memory. For example, a 100 x 32000 matrix of 0s and 1s is 24 MB in size! This causes crashes in R when I try to work with the full 10k documents. The functions follow, and a toy example is in the last 3 lines. Can anyone spot why the "sparser" function in particular returns such memory-intensive results?

listAllWords <- function(docs)
{
  # split each document on whitespace, drop stop words, and return the
  # unique remaining tokens (stopWords must be a character vector of
  # words to exclude, defined elsewhere)
  str1 <- strsplit(x=docs, split="\\s", fixed=FALSE)
  dictDupl <- unlist(str1)[!(unlist(str1) %in% stopWords)]
  dictionary <- unique(dictDupl)
  return(dictionary)
}

#function to create the sparse matrix of words as they appear in each article segment
sparser <- function (docs, dictionary) 
{
  num.docs <- length(docs) #dtm rows
  num.words <- length(dictionary) #dtm columns
  dtm <- mat.or.vec(num.docs,num.words) # Instantiate dtm of zeroes
  for (i in 1:num.docs)
  {
    doc.temp <- unlist(strsplit(x=docs[i], split="\\s", fixed=FALSE)) #vectorize words
    num.words.doc <- length(doc.temp)
    for (j in 1:num.words.doc)
    {
      ind <- which(dictionary == doc.temp[j]) #loop over words and find index in dict.
      dtm[i,ind] <- 1 #indicate this word is in this document
    }
  }
  return(dtm)
}


docs <- c("the first document contains words", "the second document is also made of words", "the third document is words and a number 4")
dictionary <- listAllWords(docs)
dtm <- sparser(docs,dictionary)
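For reference, the reported size can be reproduced directly (a check added here, not in the original post):

object.size(mat.or.vec(100, 32000))   # 25600200 bytes, i.e. the ~24 MB mentioned above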

If it makes any difference, I am running this in RStudio on Mac OS X, 64-bit.

Upvotes: 1

Views: 109

Answers (3)

Dirk is no longer here

Reputation: 368509

If you really want to be economical, look at the ff and bit packages.
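For example, a minimal sketch with the bit package (my illustration, assuming the package is installed); bit stores one bit per entry, while ff additionally keeps arrays on disk rather than in RAM:

library(bit)

b <- bit(32000)      # one document row: 32000 entries, all FALSE, roughly 4 KB
b[c(3, 7)] <- TRUE   # mark word indices 3 and 7 as present
sum(b)               # number of distinct words present: 2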

Upvotes: 0

Andrey Shabalin

Reputation: 4614

If you want to store 0/1 values economically, I would suggest the raw type.

m8 <- matrix(0,100,32000)
m4 <- matrix(0L,100,32000)
m1 <- matrix(raw(1),100,32000)

The raw type takes just 1 byte per value:

> object.size(m8)
25600200 bytes
> object.size(m4)
12800200 bytes
> object.size(m1)
3200200 bytes

Here is how to operate with it:

> m1[2,2] = as.raw(1)
> m1[2,2]
[1] 01
> as.integer(m1[2,2])
[1] 1
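One caveat worth noting (my addition, not from the answer): R defines no arithmetic for raw, so convert back to integer before doing math on the matrix:

# colSums() and sum() are not defined for raw matrices;
# convert to integer first, restoring the dimensions explicitly
counts <- colSums(matrix(as.integer(m1), nrow(m1), ncol(m1)))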

Upvotes: 0

joran

Reputation: 173697

Surely part of your problem is that you are not actually storing integers, but doubles. Note:

m <- mat.or.vec(100,32000)
m1 <- matrix(0L,100,32000)

> object.size(m)
25600200 bytes
> object.size(m1)
12800200 bytes

And note the lack of the "L" in the code for mat.or.vec:

> mat.or.vec
function (nr, nc) 
if (nc == 1L) numeric(nr) else matrix(0, nr, nc)
<bytecode: 0x1089984d8>
<environment: namespace:base>

You will also want to explicitly assign 1L; otherwise, R will convert everything to doubles upon the first assignment, I think. You can verify that by assigning the plain (double) value 1 to one element of m1 above and rechecking the object size.

I should probably also mention the function storage.mode, which can help you verify that you're using integers.
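A quick sketch of both checks (the sizes shown are what 64-bit R reports):

m1 <- matrix(0L, 100, 32000)
storage.mode(m1)         # "integer"

m1[1, 1] <- 1            # double literal: the whole matrix is coerced
storage.mode(m1)         # "double"
object.size(m1)          # 25600200 bytes again

m1 <- matrix(0L, 100, 32000)
m1[1, 1] <- 1L           # integer literal: storage mode is preserved
object.size(m1)          # still 12800200 bytes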

Upvotes: 1
