Kumar Roshan Mehta

Reputation: 3316

Distributed cross correlation matrix computation

How can I compute the Pearson cross-correlation matrix of a large (>10 TB) dataset, ideally in a distributed manner? Any suggestions for an efficient distributed algorithm would be appreciated.

Update: I read the implementation of Apache Spark MLlib's correlation:

Pearson Computation:
/home/d066537/codespark/spark/mllib/src/main/scala/org/apache/spark/mllib/stat/correlation/Correlation.scala
Covariance Computation:
/home/d066537/codespark/spark/mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala

but it looks to me like all the computation happens on one node, so it is not distributed in the real sense.

Please shed some light on this. I also tried executing it on a 3-node Spark cluster; the screenshots are below:

[Screenshot 1: entire computation timeline]
[Screenshot 2: one of the task's details]

As you can see from the second image, the data is pulled up to one node and then the computation is done there. Am I right here?
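For reference, the run under discussion can be reproduced with something like this (a minimal sketch; the input path, file format, and object name are my assumptions):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.stat.Statistics

    object CorrMatrixExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("pearson-corr"))
        // Hypothetical input: one whitespace-separated row of doubles per line.
        val rows = sc.textFile("hdfs:///data/matrix.txt")
          .map(line => Vectors.dense(line.trim.split("\\s+").map(_.toDouble)))
        // For n columns this returns an n x n local Matrix on the driver,
        // which may be why the work appears to land on a single node.
        val corr = Statistics.corr(rows, "pearson")
        println(corr)
        sc.stop()
      }
    }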

Upvotes: 9

Views: 1172

Answers (2)

Jihun No

Reputation: 1221

Each local dataset can be reduced to partial sums, from which you obtain standard deviations and covariances. The standard deviations and the covariance together then give the correlation.

This is a working example: https://github.com/jeesim2/distributed-correlation
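A minimal sketch of the idea in Spark, assuming an RDD of value pairs (the Stats class and pearson helper are my own naming, not taken from the linked repository):

    import org.apache.spark.rdd.RDD

    // Per-partition sufficient statistics for one pair of variables.
    case class Stats(n: Long, sx: Double, sy: Double,
                     sxx: Double, syy: Double, sxy: Double) {
      def merge(o: Stats): Stats =
        Stats(n + o.n, sx + o.sx, sy + o.sy,
              sxx + o.sxx, syy + o.syy, sxy + o.sxy)
    }

    def pearson(pairs: RDD[(Double, Double)]): Double = {
      // Each node folds its local data into one small Stats value;
      // only these summaries are combined across the cluster.
      val s = pairs
        .map { case (x, y) => Stats(1L, x, y, x * x, y * y, x * y) }
        .reduce(_ merge _)
      val cov  = s.sxy / s.n - (s.sx / s.n) * (s.sy / s.n)
      val stdX = math.sqrt(s.sxx / s.n - math.pow(s.sx / s.n, 2))
      val stdY = math.sqrt(s.syy / s.n - math.pow(s.sy / s.n, 2))
      cov / (stdX * stdY)
    }

Doing this per column pair yields the full correlation matrix, and the data shuffled across the network is proportional to the number of column pairs rather than the dataset size.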

Upvotes: 0

dangiankit

Reputation: 119

To start with, have a look at this to see if things are going right. You may then refer to any of these implementations: MPI/OpenMP: Agomezl or Meismyles; MapReduce: Vangjee or Seawolf42. It'd also be interesting to read this before you proceed. On a different note, James's thesis provides some pointers if you're interested in computing correlations that are robust to outliers.

Upvotes: 5
