philantrovert

Reputation: 10092

Spark JDBC fetchsize option

I currently have an application which is supposed to connect to different types of databases, run a specific query on that database using Spark's JDBC options and then write the resultant DataFrame to HDFS.

The performance was extremely bad for Oracle (I didn't check all of them). It turns out this was because of the fetchSize property, which defaults to 10 rows for Oracle. So I increased it to 1000 and the performance gain was quite visible. Then I changed it to 10000, but some of the tables started failing with an out-of-memory issue in the executor (6 executors, 4G memory each, 2G driver memory).
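For reference, a minimal sketch of the kind of read involved (the URL, credentials, table name and output path are placeholders, not the actual values):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("jdbc-to-hdfs").getOrCreate()

    // Placeholder connection details; "fetchsize" is the option in question.
    val df = spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
      .option("dbtable", "SOME_SCHEMA.SOME_TABLE")
      .option("user", "scott")
      .option("password", "tiger")
      .option("fetchsize", "1000") // rows per round trip; the Oracle driver defaults to 10
      .load()

    df.write.parquet("hdfs:///data/some_table")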

My questions are:

Upvotes: 11

Views: 25204

Answers (2)

Ion Freeman

Reputation: 540

To answer @y2k-shubham's follow-up question "do I pass it inside connectionProperties param": per the current docs, the answer is "Yes", but note the lower-case 's'.

fetchsize: The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to low fetch size (e.g. Oracle with 10 rows). This option applies only to reading.
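For instance, a sketch of both ways of passing it (connection details are placeholders):

    import java.util.Properties
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().getOrCreate()
    val url = "jdbc:oracle:thin:@//dbhost:1521/ORCL" // placeholder URL

    // 1) Inside connectionProperties, as asked; note the lower-case key.
    val props = new Properties()
    props.setProperty("user", "scott")
    props.setProperty("password", "tiger")
    props.setProperty("fetchsize", "1000")
    val viaProps = spark.read.jdbc(url, "SOME_TABLE", props)

    // 2) The equivalent DataFrameReader option.
    val viaOption = spark.read.format("jdbc")
      .option("url", url)
      .option("dbtable", "SOME_TABLE")
      .option("user", "scott")
      .option("password", "tiger")
      .option("fetchsize", "1000")
      .load()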

Upvotes: 1

T. Gawęda

Reputation: 16076

Fetch size: it's just a value set on the JDBC PreparedStatement.

You can see it in JDBCRDD.scala:

    stmt.setFetchSize(options.fetchSize)

You can read more about the JDBC fetch size here.
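To illustrate what that value does at the driver level, here is a plain-JDBC sketch (the URL, credentials and query are placeholders): the driver buffers fetchSize rows per network round trip, so a larger value means fewer round trips but more memory held per open result set.

    import java.sql.DriverManager

    val conn = DriverManager.getConnection(
      "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")
    val stmt = conn.prepareStatement("SELECT * FROM SOME_TABLE")
    stmt.setFetchSize(1000) // buffer 1000 rows per round trip instead of Oracle's default 10
    val rs = stmt.executeQuery()
    while (rs.next()) {
      // process a row; a new round trip happens only when the buffer is exhausted
    }
    rs.close(); stmt.close(); conn.close()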

One other thing you can improve is to set all four partitioning parameters (partitionColumn, lowerBound, upperBound, numPartitions), which will parallelize the read. See more here. The read can then be split across many machines, so the memory usage of each one may be smaller, as sketched below.
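A sketch of such a partitioned read, assuming a numeric ID column whose bounds are roughly known (all names and bounds are placeholders):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().getOrCreate()

    val partitioned = spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
      .option("dbtable", "SOME_TABLE")
      .option("user", "scott")
      .option("password", "tiger")
      .option("partitionColumn", "ID") // numeric/date/timestamp column to split on
      .option("lowerBound", "1")
      .option("upperBound", "1000000")
      .option("numPartitions", "12")   // 12 concurrent queries, one per slice
      .option("fetchsize", "1000")
      .load()

Each executor then reads only its own slice of the table, so the fetch buffer is held over a smaller result set.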

For details on which JDBC options are supported and how, you must search your driver's documentation; every driver may have its own behaviour.

Upvotes: 2
