VJune

Reputation: 1215

PySpark computing correlation

I want to use the pyspark.mllib.stat.Statistics.corr function to compute the correlation between two columns of a pyspark.sql.dataframe.DataFrame object. The corr function expects an RDD of Vectors objects. How do I translate a column such as df['some_name'] into an RDD of Vectors.dense objects?

Upvotes: 18

Views: 45797

Answers (4)

Mohan

Reputation: 81

from pyspark.ml.stat import Correlation
from pyspark.ml.feature import VectorAssembler

# Load data with more than 50 feature columns
newdata = spark.read.csv("sample*.csv", inferSchema=True, header=True)

# Assemble all columns into a single vector column
assembler = VectorAssembler(inputCols=newdata.columns,
                            outputCol="features", handleInvalid="keep")
df = assembler.transform(newdata).select("features")

# The correlation matrix comes back as a DenseMatrix
correlation = Correlation.corr(df, "features", "pearson").collect()[0][0]

# Convert the DenseMatrix into a DataFrame
rows = correlation.toArray().tolist()
df = spark.createDataFrame(rows, newdata.columns)
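
To sanity-check the result, the correlation matrix can be displayed or pulled into pandas. A small sketch, assuming the df and newdata from the snippet above:

# Each row of df corresponds to one input column
df.show(truncate=False)

# For small matrices, pandas gives a labelled view
pdf = df.toPandas()
pdf.index = newdata.columns
print(pdf)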

Upvotes: 4

MUK

Reputation: 411

df.stat.corr("column1","column2")
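
A minimal sketch of what this returns, assuming a small toy DataFrame (DataFrameStatFunctions.corr currently supports only the Pearson method and returns a plain Python float):

df = spark.createDataFrame([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)], ["column1", "column2"])
df.stat.corr("column1", "column2")
# 1.0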

Upvotes: 7

zero323

Reputation: 330303

There should be no need for that. For numerical columns you can compute the correlation directly using DataFrameStatFunctions.corr:

df1 = sc.parallelize([(0.0, 1.0), (1.0, 0.0)]).toDF(["x", "y"])
df1.stat.corr("x", "y")
# -1.0

Otherwise you can use VectorAssembler:

from pyspark.ml.feature import VectorAssembler

assembler = VectorAssembler(inputCols=df.columns, outputCol="features")
# .rdd is required in Spark 2.x+, where DataFrame no longer has flatMap
vectors = assembler.transform(df).select("features").rdd.flatMap(lambda x: x)
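
From there the result can be passed straight to mllib's Statistics.corr. A minimal sketch, assuming the vectors RDD built above (the explicit conversion covers the ml/mllib vector split in Spark 2.x+):

from pyspark.mllib.stat import Statistics
from pyspark.mllib.linalg import Vectors as MLLibVectors

# Statistics.corr expects mllib vectors, so convert the ml vectors first
corr_matrix = Statistics.corr(vectors.map(lambda v: MLLibVectors.dense(v.toArray())))
print(corr_matrix)  # pairwise correlation matrix as a numpy array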

Upvotes: 32

VJune

Reputation: 1215

Ok, I figured it out:

from pyspark.mllib.linalg import Vectors

# .rdd is required in Spark 2.x+, where DataFrame no longer has flatMap
v1 = df.rdd.flatMap(lambda x: Vectors.dense(x[col_idx_1]))
v2 = df.rdd.flatMap(lambda x: Vectors.dense(x[col_idx_2]))
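
For completeness, the two RDDs can then be passed to mllib's Statistics.corr, which returns a single value when given a pair of RDDs of floats. A minimal sketch, assuming v1 and v2 from above:

from pyspark.mllib.stat import Statistics

print(Statistics.corr(v1, v2, method="pearson"))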

Upvotes: 1
