Milan

Reputation: 981

Connect to Oracle DB using PySpark

I am trying to connect to an Oracle DB using PySpark.

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

spark_config = SparkConf() \
    .setMaster(config['cluster']) \
    .setAppName('sim_transactions_test') \
    .set("spark.jars", r"..\Lib\ojdbc7.jar")

sc = SparkContext(conf=spark_config)
sqlContext = SQLContext(sc)

df_sim_input = sqlContext.read \
    .format("jdbc") \
    .option("driver", "oracle.jdbc.driver.OracleDriver") \
    .option("url", config["db.url"]) \
    .option("dbtable", query) \
    .option("user", config["db.user"]) \
    .option("password", config["db.password"]) \
    .load()

This gives me a

py4j.protocol.Py4JJavaError: An error occurred while calling o31.load.
: java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver

So it seems the SparkContext cannot find the jar file. It is apparently possible to launch a PySpark shell with external jars, as shown below, but I want to load them from the Python code.
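
For reference, the shell variant I mean would look something like this (the jar path mirrors the one in my code):

pyspark --jars ..\Lib\ojdbc7.jar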

Can someone explain how to add this external jar from Python and run a query against an Oracle DB?

Extra question: how come the code works fine for a Postgres DB without adding an external JDBC driver? Is it because Spark finds the driver automatically if it is installed on your system?

Upvotes: 1

Views: 5943

Answers (1)

Assaf Mendelson

Reputation: 13001

You should probably also set driver-class-path, as jars only ships the jar file to the workers, not to the driver.
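
For example, when submitting a script with spark-submit, that means passing the jar in both places (the jar path is taken from the question; the script name is hypothetical):

spark-submit --jars ..\Lib\ojdbc7.jar --driver-class-path ..\Lib\ojdbc7.jar sim_transactions_test.py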

That said, you should be very careful when setting JVM configuration from the Python code, as you need to make sure the JVM loads with those options (you can't add them later). You can try setting PYSPARK_SUBMIT_ARGS, e.g.:

export PYSPARK_SUBMIT_ARGS="--jars jarname --driver-class-path jarname pyspark-shell"

This tells PySpark to add these options when the JVM is loaded, the same as if you had passed them on the command line.
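
Putting this together from Python, here is a minimal sketch (the jar path is an assumption taken from the question; adjust it to your setup). The environment variable must be set before the SparkContext, and therefore the JVM, is created:

import os

# Must run before the SparkContext (and hence the JVM) starts.
# The jar path is an assumption; point it at your local ojdbc7.jar.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--jars ..\\Lib\\ojdbc7.jar "
    "--driver-class-path ..\\Lib\\ojdbc7.jar "
    "pyspark-shell"
)

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(conf=SparkConf().setAppName('sim_transactions_test'))
sqlContext = SQLContext(sc)

After this, the sqlContext.read snippet from the question should be able to locate the Oracle driver.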

Upvotes: 1
