Reputation: 11
I'm experimenting with the JobServer and would like to use it in our production environment.
I want to use MLlib and spark-jobserver together, but I get an error on the spark-jobserver side when a job is submitted.
job-server[ERROR] Uncaught error from thread [JobServer-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[JobServer]
job-server[ERROR] java.lang.NoClassDefFoundError: org/apache/spark/mllib/stat/Statistics$
job-server[ERROR] at SparkCorrelation$.getCorrelation(SparkCorrelation.scala:50)
job-server[ERROR] at SparkCorrelation$.runJob(SparkCorrelation.scala:28)
job-server[ERROR] at SparkCorrelation$.runJob(SparkCorrelation.scala:11)
job-server[ERROR] at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:234)
I am using spark-jobserver 0.5.0 and Spark 1.2.
Any idea what is going wrong?
Code:
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import org.apache.spark.mllib.stat.Statistics

def getCorrelation(sc: SparkContext): Double = {
  val pathFile = "hdfs://localhost:9000/user/hduser/correlacion.csv"
  val fileData = getFileData(sc, pathFile)   // helper defined elsewhere in this object
  val colX = getDoubleColumn(fileData, 1)    // helper defined elsewhere in this object
  val colY = getDoubleColumn(fileData, 2)
  Statistics.corr(colX, colY, "pearson")     // line 50: where the NoClassDefFoundError is thrown
}

override def runJob(sc: SparkContext, config: Config): Any = {
  /*
  val dd = sc.parallelize(config.getString("input.string").split(" ").toSeq)
  dd.map((_, 1)).reduceByKey(_ + _).collect().toMap
  */
  getCorrelation(sc)
}
Upvotes: 1
Views: 164
Reputation: 452
In case you still want to know: in local mode, set SPARK_CLASSPATH to point at the MLlib jar before starting the job server.
Alternatively, modify Dependencies.scala so the job server itself depends on MLlib, by adding it to the sequence in the lazy val sparkDeps.
Both solutions found here:
https://github.com/spark-jobserver/spark-jobserver/issues/341
https://github.com/spark-jobserver/spark-jobserver/issues/138
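For the second option, the change is roughly the following. This is only a sketch against the spark-jobserver 0.5.x build: the exact sequence name (sparkDeps) and the Spark version string may differ in your checkout, so adapt it to what Dependencies.scala already contains.

lazy val sparkDeps = Seq(
  "org.apache.spark" %% "spark-core" % "1.2.0",
  // added: MLlib, so classes such as org.apache.spark.mllib.stat.Statistics
  // are on the job server's classpath at runtime
  "org.apache.spark" %% "spark-mllib" % "1.2.0"
)

For the SPARK_CLASSPATH route, the idea is to export that environment variable with the path to the spark-mllib jar before launching the job server in local mode.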
Upvotes: 1