Reputation: 327
I have set up a 3-node Hadoop cluster with HA for the NameNode and ResourceManager, and I have also installed Spark Job Server on one of the NameNode machines.
I have tested running the job-server-tests examples, such as the WordCount example and the LongPi job, and they work perfectly. I am also able to issue curl commands from a remote host to read the results via Spark Job Server.
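For reference, the working jobs were exercised with the standard Spark Job Server REST calls, roughly like this (the jar path, input string, and job id are just placeholders from my setup):
[hduser@ptfhadoop02v lib]$ curl --data-binary @job-server-tests.jar 'ptfhadoop01v:8090/jars/test'
[hduser@ptfhadoop02v lib]$ curl -d "input.string = a b c a b" 'ptfhadoop01v:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'
[hduser@ptfhadoop02v lib]$ curl 'ptfhadoop01v:8090/jobs/<job-id>'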
But when I upload "spark-examples-1.6.0-hadoop2.6.0.jar" into spark-job-server/jars and try to run the SparkPi job, it fails:
[hduser@ptfhadoop02v lib]$ curl -d "" 'ptfhadoop01v:8090/jobs?appName=SparkPi&classPath=org.apache.spark.examples.SparkPi'
{
  "status": "ERROR",
  "result": {
    "message": "Ask timed out on [Actor[akka://JobServer/user/context-supervisor/ece2be39-org.apache.spark.examples.SparkPi#-630965857]] after [10000 ms]",
    "errorClass": "akka.pattern.AskTimeoutException",
    "stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:334)", "akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)", "scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)", "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)", "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)", "akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)", "akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)", "akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)", "java.lang.Thread.run(Thread.java:745)"]
  }
}
I also tried manually placing the SparkPi.scala job under /usr/local/hadoop/spark-jobserver/job-server-tests/src/spark.jobserver and building the package with SBT, but it throws the same error.
Version Information
[hduser@ptfhadoop01v spark.jobserver]$ sbt sbtVersion
[info] Set current project to spark-jobserver (in build file:/usr/local/hadoop/spark-jobserver/job-server-tests/src/spark.jobserver/)
[info] 0.13.11
Spark Version - spark-1.6.0
Scala Version - 2.10.4
Any suggestions on how to get rid of this error and get the output from the spark-examples jar file?
Upvotes: 0
Views: 1378
Reputation: 327
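For anyone hitting the same timeout: Spark Job Server can only run classes that implement its SparkJob trait (validate plus runJob). The stock org.apache.spark.examples.SparkPi only has a main method, so the request to the context actor never completes and the ask simply times out. Rewriting the example against the trait fixed it for me. Note that runJob cannot see main's args array, so the slice count has to come from the job config instead; the key name "slices" below is just what I picked.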
package spark.jobserver

import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark._
import org.apache.spark.SparkContext._
import scala.math.random
import scala.util.Try

/** Computes an approximation to pi */
object SparkPi extends SparkJob {

  def main(args: Array[String]) {
    // Standalone entry point for local testing; the job server itself
    // only goes through validate() and runJob() below.
    val conf = new SparkConf().setMaster("local[4]").setAppName("SparkPi")
    val sc = new SparkContext(conf)
    val config = ConfigFactory.parseString("")
    val results = runJob(sc, config)
    println("Pi is roughly " + results)
  }

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = {
    SparkJobValid
  }

  override def runJob(sc: SparkContext, config: Config): Any = {
    // main's args array is not visible here, so read the slice count from
    // the job config (key name "slices" is my choice), defaulting to 2.
    val slices = Try(config.getInt("slices")).getOrElse(2)
    val n = math.min(100000L * slices, Int.MaxValue).toInt
    val count = sc.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    4.0 * count / n
  }
}
I managed to get it working by modifying the code to extend SparkJob. Thanks for the clarification.
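With the jar rebuilt and re-uploaded, the job can be submitted with the slice count passed in the POST body, which the server parses as Typesafe config. Something like this (the app name is whatever the jar was uploaded as, and "slices" matches the key read in runJob above):
[hduser@ptfhadoop02v lib]$ curl -d "slices = 4" 'ptfhadoop01v:8090/jobs?appName=sparkpi&classPath=spark.jobserver.SparkPi'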
Upvotes: 0