Reputation: 951
spark-submit --packages com.databricks:spark-redshift_2.11:2.0.1 --jars /usr/share/aws/redshift/jdbc/RedshiftJDBC4.jar /home/hadoop/test.py
How do I specify the above (PySpark) spark-submit command in Apache Livy format?
I tried the following:
curl -X POST --data '{"file": "/home/hadoop/test.py", "conf":
{"com.databricks": "spark-redshift_2.11:2.0.1"}, \
"queue": "my_queue", "name": "Livy Example", "jars" :
"/usr/share/aws/redshift/jdbc/RedshiftJDBC4.jar"}', \
-H "Content-Type: application/json" localhost:8998/batches
I referred to the following Livy article: spark livy rest api
I am also getting the following error:
"Unexpected character ('“' (code 8220 / 0x201c)): was expecting double-quote to start field name\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 37]
Upvotes: 1
Views: 7395
Reputation: 11449
Your command is wrong. Please use the following example to construct the command.
spark-submit command
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--jars a.jar,b.jar \
--py-files a.py,b.py \
--files foo.txt,bar.txt \
--archives foo.zip,bar.tar \
--master yarn \
--deploy-mode cluster \
--driver-memory 10G \
--driver-cores 1 \
--executor-memory 20G \
--executor-cores 3 \
--num-executors 50 \
--queue default \
--name test \
--proxy-user foo \
--conf spark.jars.packages=xxx \
/path/to/examples.jar \
1000
Livy REST JSON protocol
{
  "className": "org.apache.spark.examples.SparkPi",
  "jars": ["a.jar", "b.jar"],
  "pyFiles": ["a.py", "b.py"],
  "files": ["foo.txt", "bar.txt"],
  "archives": ["foo.zip", "bar.tar"],
  "driverMemory": "10G",
  "driverCores": 1,
  "executorCores": 3,
  "executorMemory": "20G",
  "numExecutors": 50,
  "queue": "default",
  "name": "test",
  "proxyUser": "foo",
  "conf": {"spark.jars.packages": "xxx"},
  "file": "hdfs:///path/to/examples.jar",
  "args": ["1000"]
}
The spark.jars.packages conf entry is the equivalent of spark-submit's --packages option; all transitive dependencies will be resolved automatically when you use it.
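Applied to your case, the batch request might look something like the sketch below (assuming Livy listens on localhost:8998 as in your attempt). The package coordinate goes under spark.jars.packages in conf, jars must be a JSON array, and every quote has to be a straight ASCII double quote; the "code 8220" error in your question means curly/smart quotes ended up in the payload.

curl -X POST \
  -H "Content-Type: application/json" \
  --data '{
    "file": "/home/hadoop/test.py",
    "name": "Livy Example",
    "queue": "my_queue",
    "jars": ["/usr/share/aws/redshift/jdbc/RedshiftJDBC4.jar"],
    "conf": {"spark.jars.packages": "com.databricks:spark-redshift_2.11:2.0.1"}
  }' \
  localhost:8998/batches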
Alternatively, if you use Livy through an interpreter (for example the Zeppelin Livy interpreter), go to the interpreter settings page and add the new property under the livy settings -
livy.spark.jars.packages
And the value
com.databricks:spark-redshift_2.11:2.0.1
Restart the interpreter and retry the query.
Upvotes: 1