Maher HTB

Reputation: 737

Apply PCA on specific columns with Apache Spark

I am trying to apply PCA to a dataset that has a header row and named fields. Here is the code I used; any help with selecting the specific columns on which to apply PCA would be appreciated.

import org.apache.spark.mllib.linalg.{Matrix, Vectors}
import org.apache.spark.mllib.linalg.distributed.RowMatrix

val inputMatrix = sc.textFile("C:/Users/mhattabi/Desktop/Realase of 01_06_2017/TopDrive_WithoutConstant.csv").map { line =>
  // note: this parses every line, including the header row, as numbers
  val values = line.split(",").map(_.toDouble)
  Vectors.dense(values)
}

val mat: RowMatrix = new RowMatrix(inputMatrix)
val pc: Matrix = mat.computePrincipalComponents(4)
// Project the rows to the linear space spanned by the top 4 principal components.

val projected: RowMatrix = mat.multiply(pc)

Updated version: I tried the following instead.

import org.apache.spark.ml.feature.{PCA, RFormula}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local").appName("my-spark-app").getOrCreate()

val columnsToUse: Seq[String] = Seq("Col0", "Col1", "Col2", "Col3", "Col4")
val k: Int = 2

val df = spark.read.format("csv").options(Map("header" -> "true", "inferSchema" -> "true")).load("C:/Users/mhattabi/Desktop/donnee/cassandraTest_1.csv")

val rf = new RFormula().setFormula(s"~ ${columnsToUse.mkString(" + ")}")
val pca = new PCA().setInputCol("features").setOutputCol("pcaFeatures").setK(k)

val featurized = rf.fit(df).transform(df)
// compute the principal components
val principalComponent = pca.fit(featurized).transform(featurized)
principalComponent.select("pcaFeatures").show(4,false)

+-----------------------------------------+
|pcaFeatures                              |
+-----------------------------------------+
|[-0.536798281241379,0.495499034754084]   |
|[-0.32969328815797916,0.5672811417154808]|
|[-1.32283465170085,0.5982789033642704]   |
|[-0.6199718696225502,0.3173072633712586] |
+-----------------------------------------+

I got this for the principal components. Now I want to save this to a CSV file and add a header. Any help would be appreciated.
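Since the CSV writer cannot serialize a Vector column, one approach (a minimal sketch, assuming Spark 2.x ML vectors; "pcaResult" is a placeholder output directory) is to unpack each component into its own Double column before writing:

import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}

// Unpack the Vector column into one Double column per component
val vecToArray = udf { v: Vector => v.toArray }
val flat = (0 until k).foldLeft(
  principalComponent.withColumn("pcaArray", vecToArray(col("pcaFeatures")))
) { (acc, i) => acc.withColumn(s"pc$i", col("pcaArray").getItem(i)) }

// header = "true" writes the column names (pc0, pc1, ...) as the first line
flat.select((0 until k).map(i => col(s"pc$i")): _*)
  .write.option("header", "true")
  .csv("pcaResult")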

Thanks a lot

Upvotes: 0

Views: 768

Answers (2)

eliasah

Reputation: 40380

You can use RFormula in this case:

import org.apache.spark.ml.feature.{RFormula, PCA}

val columnsToUse: Seq[String] = ??? // the columns to run PCA on
val k: Int = ???                    // number of principal components to keep

val df = spark.read.format("csv").options(Map("header" -> "true", "inferSchema" -> "true")).load("/tmp/foo.csv")

// RFormula assembles the listed columns into a single "features" vector column
val rf = new RFormula().setFormula(s"~ ${columnsToUse.mkString(" + ")}")
val pca = new PCA().setInputCol("features").setK(k)

val featurized = rf.fit(df).transform(df)
val projected = pca.fit(featurized).transform(featurized)
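Since no explicit output column is set, PCA writes the projection to a default-named column; a quick way to inspect it (a small usage sketch):

// The output column defaults to "<uid>__output" when not set explicitly
projected.select(pca.getOutputCol).show(4, truncate = false)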

Upvotes: 2

Nazarii Bardiuk

Reputation: 4342

java.lang.NumberFormatException: For input string: "DateTime"

This means that your input file contains the value DateTime, which your code then tries to convert to a Double.

It is probably in the header row of your input file.
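One common fix, sketched here under the assumption that the header is the first line of the file, is to drop it before parsing:

import org.apache.spark.mllib.linalg.Vectors

// Read the raw lines, then filter out the header before converting to Doubles
val raw = sc.textFile("C:/Users/mhattabi/Desktop/Realase of 01_06_2017/TopDrive_WithoutConstant.csv")
val header = raw.first()
val inputMatrix = raw.filter(_ != header).map { line =>
  Vectors.dense(line.split(",").map(_.toDouble))
}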

Upvotes: 0
