Reputation: 2968
I am following the instructions for starting a Google Dataproc cluster with an initialization script that starts a Jupyter notebook.
How can I include extra JAR files (spark-xml, for example) in the resulting SparkContext in Jupyter notebooks (particularly pyspark)?
Upvotes: 5
Views: 3596
Reputation: 2683
The answer depends slightly on which JARs you're looking to load. For example, you can make spark-xml available by setting the following property when creating a cluster:
$ gcloud dataproc clusters create [cluster-name] \
--zone [zone] \
--initialization-actions \
gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--properties spark:spark.jars.packages=com.databricks:spark-xml_2.11:0.4.1
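Once the cluster is up, the package is on the classpath of the notebook's SparkContext, so you can use it directly from pyspark. A minimal sketch, assuming the Jupyter kernel provides sc and using a placeholder GCS path and rowTag:
from pyspark.sql import SQLContext

# 'sc' is created by the Jupyter PySpark kernel; wrap it in a SQLContext
sqlContext = SQLContext(sc)

# Read XML through the spark-xml data source; the path and rowTag are placeholders
df = (sqlContext.read
      .format('com.databricks.spark.xml')
      .option('rowTag', 'record')
      .load('gs://your-bucket/path/to/data.xml'))
df.printSchema()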
To specify multiple Maven coordinates, you will need to swap the gcloud dictionary separator character from ',' to something else, since ',' is needed to separate the packages to install:
$ gcloud dataproc clusters create [cluster-name] \
--zone [zone] \
--initialization-actions \
gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--properties=^#^spark:spark.jars.packages=artifact1,artifact2,artifact3
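As a concrete illustration (the spark-avro coordinate is only an example artifact, not something from the question), loading spark-xml together with spark-avro might look like this; note that the Maven coordinates themselves stay comma-separated, while '#' now separates the gcloud property entries:
$ gcloud dataproc clusters create [cluster-name] \
--zone [zone] \
--initialization-actions \
gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--properties=^#^spark:spark.jars.packages=com.databricks:spark-xml_2.11:0.4.1,com.databricks:spark-avro_2.11:3.2.0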
Details on how the escaping characters can be changed are available in the gcloud help:
$ gcloud help topic escaping
Upvotes: 7