I am running a k-means algorithm. I create a VectorAssembler, set inputCols to ("longitude", "latitude") and the outputCol to "location". I need to cluster the data from a JSON file into 3 clusters. I cluster the data by longitude and latitude, and I create the location vector to combine both. Longitude and latitude are both DoubleType. I think it is because of the location vector that I am getting the error below:
19/04/08 15:20:56 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
org.apache.spark.SparkException: Failed to execute user defined function(VectorAssembler$$Lambda$1629/684426930: (struct<latitude:double,longitude:double>) => struct<type:tinyint,size:int,indices:array<int>,values:array<double>>)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
Here is my code:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.DoubleType
import org.apache.spark.ml.clustering.{KMeans, KMeansModel}
import org.apache.spark.ml.evaluation.ClusteringEvaluator
import org.apache.spark.ml.feature.VectorAssembler
//plotting
object Clustering_kmeans {
  def main(args: Array[String]): Unit = {
    println("hello world me")
    // Spark session
    val spark = SparkSession.builder().appName("Clustering_Kmeans").master("local[*]").getOrCreate()
    import spark.implicits._
    spark.sparkContext.setLogLevel("WARN")
    // Load the data.
    val stations = spark.read.option("multiline", true).json("/home/aymenstien/Téléchargements/Brisbane_CityBike.json")
    // Cast the coordinate columns to DoubleType.
    val st = stations.withColumn("longitude", $"longitude".cast(DoubleType))
      .withColumn("latitude", $"latitude".cast(DoubleType)).cache()
    // Assemble latitude and longitude into a single feature vector.
    val stationVA = new VectorAssembler().setInputCols(Array("latitude", "longitude")).setOutputCol("location")
    val stationWithLoc = stationVA.transform(st)
    // Print
    stationWithLoc.show(truncate = false)
    stationWithLoc.printSchema()
    //val x = st.select('longitude).as[Double].collect()
    //val y = st.select('latitude).as[Double].collect()
    //st.printSchema()
  }
}
Here is the schema:
root
|-- address: string (nullable = true)
|-- coordinates: struct (nullable = true)
| |-- latitude: double (nullable = true)
| |-- longitude: double (nullable = true)
|-- id: double (nullable = true)
|-- latitude: double (nullable = true)
|-- longitude: double (nullable = true)
|-- name: string (nullable = true)
|-- position: string (nullable = true)
|-- location: vector (nullable = true)
Since you have nullable = true for all features, VectorAssembler will throw an error if you do have any nulls. Try setting handleInvalid to "skip". This will filter out rows with any nulls.
val stationVA = new VectorAssembler()
  .setInputCols(Array("latitude", "longitude"))
  .setOutputCol("location")
  .setHandleInvalid("skip")
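Once the assembler no longer fails, the clustering itself (the question asks for 3 clusters) is a short step further. Below is a minimal sketch, assuming the st and stationWithLoc values from the question's code after the handleInvalid fix; the null-count check and the seed are illustrative, not from the original post:
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.evaluation.ClusteringEvaluator

// Sanity check: count the rows that handleInvalid = "skip" will drop.
val nullCount = st.filter($"latitude".isNull || $"longitude".isNull).count()
println(s"rows with null coordinates: $nullCount")

// Fit k-means with k = 3 on the assembled "location" vector.
val kmeans = new KMeans()
  .setK(3)
  .setFeaturesCol("location")
  .setSeed(1L)
val model = kmeans.fit(stationWithLoc)

// Assign each row to a cluster and inspect the centers.
val predictions = model.transform(stationWithLoc)
predictions.select("latitude", "longitude", "prediction").show(5)
model.clusterCenters.foreach(println)

// Silhouette score as a quick quality check of the clustering.
val silhouette = new ClusteringEvaluator()
  .setFeaturesCol("location")
  .evaluate(predictions)
println(s"silhouette = $silhouette")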