vallim

Reputation: 308

How to run SQLContext in spark-jobserver

I'm trying to run a job locally on spark-jobserver. My application has the following dependencies:

name := "spark-test"

version := "1.0"

scalaVersion := "2.10.6"

resolvers += Resolver.jcenterRepo

libraryDependencies += "org.apache.spark"  %%  "spark-core"  %  "1.6.1"
libraryDependencies += "spark.jobserver"  %%  "job-server-api" % "0.6.2" % "provided"
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.6.2"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.6.2"
libraryDependencies += "com.holdenkarau" % "spark-testing-base_2.10" % "1.6.2_0.4.7" % "test"

I've generated the application package using:

sbt assembly

After that, I've submitted the package like this:

curl --data-binary @spark-test-assembly-1.0.jar localhost:8090/jars/myApp
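The job is then triggered through the job server's REST API, roughly like this (appName matches the upload above, classPath the job class):

curl -d "" 'localhost:8090/jobs?appName=myApp&classPath=jobs.TransformationJob'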

When I triggered the job, I got the following error:

{
  "duration": "0.101 secs",
  "classPath": "jobs.TransformationJob",
  "startTime": "2017-02-17T13:01:55.549Z",
  "context": "42f857ba-jobs.TransformationJob",
  "result": {
    "message": "java.lang.Exception: Could not find resource path for Web UI: org/apache/spark/sql/execution/ui/static",
    "errorClass": "java.lang.RuntimeException",
    "stack": ["org.apache.spark.ui.JettyUtils$.createStaticHandler(JettyUtils.scala:180)", "org.apache.spark.ui.WebUI.addStaticHandler(WebUI.scala:117)", "org.apache.spark.sql.execution.ui.SQLTab.<init>(SQLTab.scala:34)", "org.apache.spark.sql.SQLContext$$anonfun$createListenerAndUI$1.apply(SQLContext.scala:1369)", "org.apache.spark.sql.SQLContext$$anonfun$createListenerAndUI$1.apply(SQLContext.scala:1369)", "scala.Option.foreach(Option.scala:236)", "org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1369)", "org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:77)", "jobs.TransformationJob$.runJob(TransformationJob.scala:64)", "jobs.TransformationJob$.runJob(TransformationJob.scala:14)", "spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:301)", "scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)", "scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)", "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)", "java.lang.Thread.run(Thread.java:745)"]
  },
  "status": "ERROR",
  "jobId": "a6bd6f23-cc82-44f3-8179-3b68168a2aa7"
}

Here is the part of the application that is failing:

override def runJob(sparkCtx: SparkContext, config: Config): Any = {
    val sqlContext = new SQLContext(sparkCtx)
    ...
}

I have some questions:

1) I've noticed that to run spark-jobserver locally I don't need to have Spark installed. Does spark-jobserver already come with Spark embedded?

2) How do I find out which version of Spark is being used by spark-jobserver? Where is that defined?

3) I'm using version 1.6.2 of spark-sql. Should I change it or keep it?

If anyone can answer my questions, I will be very grateful.

Upvotes: 1

Views: 379

Answers (1)

noorul

Reputation: 1353

  1. Yes, spark-jobserver has Spark dependencies. Instead of job-server/reStart you should use job-server-extras/reStart, which pulls in the SQL-related dependencies (see the sketch after this list).
  2. Look at project/Versions.scala in the spark-jobserver repository.
  3. I don't think you need to declare spark-sql yourself, because it is included when you run job-server-extras/reStart.
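For reference, a minimal job that keeps your approach of building the SQLContext from the SparkContext could look roughly like this. It's only a sketch, not your actual job: the validate body and the placeholder DataFrame are just illustrations.

package jobs

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

object TransformationJob extends SparkJob {

  override def validate(sparkCtx: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sparkCtx: SparkContext, config: Config): Any = {
    // Building the SQLContext inside the job is fine once the job server itself
    // (started via job-server-extras/reStart) has spark-sql on its classpath.
    val sqlContext = new SQLContext(sparkCtx)
    import sqlContext.implicits._

    // Placeholder transformation: replace with your own logic.
    sparkCtx.parallelize(Seq((1, "a"), (2, "b"))).toDF("id", "value").count()
  }
}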

Upvotes: 1
