olala

Reputation: 4456

dist() function in R: vector size limitation

I was trying to draw a hierarchical clustering of some samples (40 of them) over some features (genes). I have a big table with 500k rows and 41 columns (the first one is the name), and when I tried

d <- dist(as.matrix(file), method = "euclidean")

I got this error

Error: cannot allocate vector of size 1101.1 Gb

How can I get around this limitation? I googled it and came across the ff package in R, but I don't quite understand whether it could solve my issue.

Thanks!

Upvotes: 6

Views: 8234

Answers (3)

Yoann Pageaud

Reputation: 519

I just ran into a related issue, but with fewer rows (around 100 thousand, for 16 columns).
RAM size is the limiting factor.
To limit the memory needed, I used two functions from two different packages. From parallelDist, the function parDist() lets you obtain the distances quite fast. It does use RAM during the process, of course, but the resulting dist object seems to take up less memory (no idea why).
Then I used the hclust() function, but from the package fastcluster. fastcluster is actually not that fast on such an amount of data, but it seems to use less memory than the default hclust().
Hope this is useful for anybody who finds this topic.
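
For reference, a minimal sketch of that workflow (the toy matrix and the clustering parameters are my own assumptions, not from the original answer):

library(parallelDist)   # parDist(): multi-threaded distance computation
library(fastcluster)    # drop-in hclust() with a smaller memory footprint

# toy data: 1,000 observations with 16 features
m <- matrix(rnorm(1000 * 16), ncol = 16)

# compute the dist object in parallel
d <- parDist(m, method = "euclidean")

# hierarchical clustering with fastcluster's hclust()
hcl <- fastcluster::hclust(d, method = "complete")
plot(hcl)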

Upvotes: 3

Has QUIT--Anony-Mousse

Reputation: 77474

When dealing with large data sets, R is not the best choice.

The majority of methods in R seem to be implemented by computing a full distance matrix, which inherently needs O(n^2) memory and runtime. Matrix-based implementations don't scale well to large data unless the matrix is sparse (which a distance matrix, by definition, isn't).

I don't know if you realized that 1101.1 Gb is over a terabyte. I don't think you have that much RAM, and you probably won't have the time to wait for this matrix to be computed either.
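
A quick back-of-the-envelope check (my own numbers, assuming the ~500k rows from the question and 8-byte doubles) gives the same order of magnitude:

n <- 500000
n * (n - 1) / 2 * 8 / 2^30   # lower-triangle entries * 8 bytes, in GiB
# roughly 931 GiB, i.e. about a terabyte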

For example, ELKI is much more powerful for clustering, as you can enable index structures to accelerate many algorithms. This saves both memory (usually down to linear memory usage, for storing the cluster assignments) and runtime (usually down to O(n log n), one O(log n) operation per object).

But of course, it also varies from algorithm to algorithm. K-means, for example, which needs point-to-mean distances only, does not need (and cannot use) an O(n^2) distance matrix, as the sketch below shows.
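
A minimal sketch (with a made-up matrix of the size mentioned in the question): base R's kmeans() works directly on the data and never builds a distance matrix.

set.seed(1)
m <- matrix(rnorm(500000 * 40), ncol = 40)  # ~160 MB of doubles, fits in RAM easily

# only point-to-centroid distances are computed at each iteration
km <- kmeans(m, centers = 5, iter.max = 25)
table(km$cluster)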

So in the end: I don't think the memory limit of R is your actual problem. The method you want to use doesn't scale.

Upvotes: 3

zero323

Reputation: 330313

Generally speaking, hierarchical clustering is not the best approach for dealing with very large datasets.

In your case, however, there is a different problem. If you want to cluster samples, the structure of your data is wrong. Observations should be represented as the rows, and gene expression (or whatever kind of data you have) as the columns.

Let's assume you have data like this:

data <- as.data.frame(matrix(rnorm(n=500000*40), ncol=40))

What you want to do is:

 # Create transposed data matrix
 data.matrix.t <- t(as.matrix(data))

 # Create distance matrix
 dists <- dist(data.matrix.t)

 # Clustering
 hcl <- hclust(dists)

 # Plot
 plot(hcl)

NOTE

You should remember that Euclidean distances can be rather misleading when you work with high-dimensional data.

Upvotes: 5
