Reputation: 15
I have a problem with the Apache Spark job-server and my .jar containing a SparkJob.
I have a VirtualBox VM running DataStax, which provides Cassandra and Spark. I installed the job-server from the job-server git repository. To run the examples I built them with sbt job-server-tests/package
and then started the job-server from a terminal with sbt re-start
The bundled examples work:
curl --data-binary @/home/job-server/job-server-tests/target/job.jar localhost:8090/jars/test
curl -d "" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.LongPiJob'
The problem starts when I build my own .jar.
I use Eclipse with the Scala IDE on Windows. I installed the sbteclipse plugin and created the folder C:\Users\user\scalaWorkspace\LongPiJob
containing a Scala project. From cmd in that folder I ran sbt eclipse,
sbt compile
and sbt package
. Then I copied the .jar to the VirtualBox VM and uploaded it with the first curl command. When I run the second curl command, I get an error:
job-server[ERROR] Exception in thread "pool-25-thread-1" java.lang.AbstractMethodError: com.forszpaniak.LongPiJob$.validate(Ljava/lang/Object;Lcom/typesafe/config/Config;)Lspark/jobserver/SparkJobValidation;
job-server[ERROR]     at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:225)
job-server[ERROR]     at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
job-server[ERROR]     at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
job-server[ERROR]     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
job-server[ERROR]     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
job-server[ERROR]     at java.lang.Thread.run(Thread.java:745)
in the terminal where I started the server. In the curl terminal I get:
[root@localhost spark-jobserver]# curl -d "stress.test.longpijob.duration=15" 'localhost:8090/jobs?appNametestJob1.5&classPath=com.forszpaniak.LongPiJob'
{
  "status": "ERROR",
  "result": {
    "message": "Ask timed out on [Actor[akka://JobServer/user/context-supervisor/4538158c-com.forszpaniak.LongPiJob#-713999361]] after [10000 ms]",
    "errorClass": "akka.pattern.AskTimeoutException",
    "stack": [
      "akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)",
      "akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)",
      "scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)",
      "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)",
      "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)",
      "akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)",
      "akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)",
      "akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)",
      "java.lang.Thread.run(Thread.java:745)"
    ]
  }
}
In my .jar I use the code from the example LongPiJob.scala. I have searched for information about this server error, and I think it may be a version problem:
java.lang.AbstractMethodError: com.forszpaniak.LongPiJob$.validate(Ljava/lang/Object;Lcom/typesafe/config/Config;)Lspark/jobserver/SparkJobValidation;
I think that instead of Object it should be SparkContext...
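For reference, this is roughly what a job implementing the SparkContext-based API looks like (a sketch based on the published LongPiJob example; exact imports and signatures may differ between job-server versions):

```scala
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

object LongPiJob extends SparkJob {
  // The server calls validate(...) before runJob(...). If the jar was
  // compiled against a job-server API whose validate signature differs
  // from the one the running server expects (SparkContext here vs. the
  // erased Object of a different API version), the JVM raises
  // AbstractMethodError at call time, which is the error shown above.
  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {
    // job body omitted
  }
}
```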
I use DataStax 4.6, job-server 0.5.1, Scala 2.10.4, sbt 0.13, Spark 1.1.0.
Upvotes: 1
Views: 1580
Reputation: 16576
Spark JobServer 0.5.1 is compatible with Spark 1.3.0, but you are running Spark 1.1.0. I would first try changing the job-server version to 0.4.1, which matches Spark 1.1.0:
Job Server Version    Spark Version
0.3.1                 0.9.1
0.4.0                 1.0.2
0.4.1                 1.1.0
0.5.0                 1.2.0
0.5.1                 1.3.0
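Concretely, that means pinning matching versions in your project's build.sbt before rebuilding the jar. A sketch, assuming standard artifact coordinates (in the 0.4.x era many people instead built the job-server jars locally with sbt publishLocal, so adjust the resolver and coordinates to your setup):

```scala
scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  // job-server 0.4.1 pairs with the Spark 1.1.0 that ships in DSE 4.6
  "spark.jobserver"  % "job-server-api" % "0.4.1" % "provided",
  "org.apache.spark" %% "spark-core"    % "1.1.0" % "provided"
)
```

After changing the versions, rerun sbt package and upload the new jar again with the first curl command.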
Then you may want to modify the server_start.sh script so that it uses the DSE classpath. This should help you avoid other errors in the future.
Something like
dse spark-submit --class $MAIN $appdir/spark-job-server.jar --driver-java-options "$GC_OPTS $JAVA_OPTS $LOGGING_OPTS" $conffile 2>&1 &
Here is a repo where I modified the server startup script to work with DSE (4.7, but it should be similar for 4.6):
https://github.com/RussellSpitzer/spark-jobserver/blob/DSP-47-EAP3/bin/server_start.sh
Upvotes: 4