user3892050

Reputation: 51

Spark jobserver ERROR: ClassNotFoundException

I have been trying out Spark using spark-shell. All of my data is in SQL.

I used to include external jars using the --jars flag:

    /bin/spark-shell --jars /path/to/mysql-connector-java-5.1.23-bin.jar --master spark://sparkmaster.com:7077

I also included the jar on the classpath by changing the bin/compute-classpath.sh file, and I was running successfully with this configuration.

Now, when I run a standalone job through jobserver, I get the following error message:

result: {
    "message" : "com.mysql.jdbc.Driver",
    "errorClass" : "java.lang.ClassNotFoundException",
    "stack" : [.......]
}

I have included the jar file in my local.conf file as below:

    context-settings {
        .....
        dependent-jar-uris = ["file:///absolute/path/to/the/jarfile"]
        ......
    }
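For reference, the job reads from MySQL over JDBC roughly along these lines. This is a minimal sketch, not the exact job: the object name, connection URL, and table are placeholders, and it assumes the Spark 1.4+ DataFrame JDBC source together with the legacy spark-jobserver SparkJob API.

    import com.typesafe.config.Config
    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext
    import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

    // Hypothetical job; names, URL, and table are placeholders.
    object MySqlCountJob extends SparkJob {
      override def validate(sc: SparkContext, config: Config): SparkJobValidation =
        SparkJobValid

      override def runJob(sc: SparkContext, config: Config): Any = {
        val sqlContext = new SQLContext(sc)
        // The executors must be able to load com.mysql.jdbc.Driver;
        // otherwise this step fails with ClassNotFoundException.
        val df = sqlContext.read.format("jdbc").options(Map(
          "url"     -> "jdbc:mysql://dbhost:3306/mydb",
          "dbtable" -> "mytable",
          "driver"  -> "com.mysql.jdbc.Driver")).load()
        df.count()
      }
    }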

Upvotes: 2

Views: 838

Answers (3)

snow04

Reputation: 159

To register your application JAR with the jobserver:

    curl --data-binary @/PATH/jobs_jar_2.10-1.0.jar 192.168.0.115:8090/jars/job_to_be_registered

To post the dependency jar, create a context with dependent-jar-uris:

    curl -d "" 'http://192.168.0.115:8090/contexts/new_context?dependent-jar-uris=file:///path/dependent.jar'
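A job can then be submitted against that context; the classPath value below is a placeholder for your own job class:

    curl -d "" '192.168.0.115:8090/jobs?appName=job_to_be_registered&classPath=com.example.MySqlCountJob&context=new_context'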

This works for jobserver 1.6.1

Upvotes: 0

Scott Kidder

Reputation: 450

All of your dependencies should either be included in your spark-jobserver application JAR (e.g. an "uber-jar") or be on the classpath of the Spark executors. I recommend configuring the classpath: it's faster and requires less disk space, since the third-party library dependencies don't need to be copied to each worker every time your application runs.

Here are the steps to configure the worker (executor) classpath on Spark 1.3.1:

  1. Copy the third-party JAR(s) to each of your Spark workers and the Spark master
  2. Place the JAR(s) in the same directory on each host (e.g. /home/ec2-user/lib)
  3. Add the following line to the Spark /root/spark/conf/spark-defaults.conf file on the Spark master:

    spark.executor.extraClassPath /root/ephemeral-hdfs/conf:/home/ec2-user/lib/name-of-your-jar-file.jar

    Here's an example of my own modifications to use the Stanford NLP library:

    spark.executor.extraClassPath /root/ephemeral-hdfs/conf:/home/ec2-user/lib/stanford-corenlp-3.4.1.jar:/home/ec2-user/lib/stanford-corenlp-3.4.1-models.jar
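One way to confirm the setting took effect (a hypothetical check, not part of the original steps) is to force a class lookup on an executor from spark-shell:

    // Runs Class.forName inside a task on an executor rather than on the
    // driver; it throws ClassNotFoundException if the jar is not on the
    // worker classpath.
    sc.parallelize(Seq(1)).map(_ => Class.forName("com.mysql.jdbc.Driver").getName).collect()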

Upvotes: 2

Jishnu Prathap

Reputation: 2033

You might not have /path/to/mysql-connector-java-5.1.23-bin.jar on your workers.
You can either copy the required dependency to all Spark workers or bundle the jar you submit with its required dependencies.
I use Maven for building the jar. The scope of the dependencies must be runtime; a sketch is below.
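As a sketch (the plugin version is illustrative), the driver dependency plus the maven-shade-plugin to bundle it into the submitted jar might look like this in pom.xml:

    <!-- MySQL JDBC driver; runtime scope suffices because the code only
         references the driver by its class name. -->
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.23</version>
      <scope>runtime</scope>
    </dependency>

    <!-- Bundle compile- and runtime-scope dependencies into an uber-jar. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.3</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
        </execution>
      </executions>
    </plugin>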

Upvotes: 0
