Prakhar

Reputation: 536

Reading data from Amazon redshift in Spark 2.4

We used to read data in Spark 2.3 using the Databricks spark-redshift connector, with the following spark-shell initialization:

spark-shell --jars RedshiftJDBC42-1.2.10.1009.jar --packages com.databricks:spark-redshift_2.11:3.0.0-preview1,com.databricks:spark-avro_2.11:3.2.0

and then

val url = "jdbc:redshift://cluster-link?user=username&password=password"
val queryFinal = "select count(*) as cnt from table1"
val df = spark.read.format("com.databricks.spark.redshift")
  .option("url", url).option("tempdir", "s3n://temp-bucket/")
  .option("query", queryFinal).option("forward_spark_s3_credentials", "true")
  .load().cache

With the recent upgrade to Spark 2.4, we are unable to do so and get the following exception:

java.lang.AbstractMethodError: com.databricks.spark.redshift.RedshiftFileFormat.supportDataType(Lorg/apache/spark/sql/types/DataType;Z)Z
  at org.apache.spark.sql.execution.datasources.DataSourceUtils$$anonfun$verifySchema$1.apply(DataSourceUtils.scala:48)
  at org.apache.spark.sql.execution.datasources.DataSourceUtils$$anonfun$verifySchema$1.apply(DataSourceUtils.scala:47)
  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
  at org.apache.spark.sql.execution.datasources.DataSourceUtils$.verifySchema(DataSourceUtils.scala:47)
  at org.apache.spark.sql.execution.datasources.DataSourceUtils$.verifyReadSchema(DataSourceUtils.scala:39)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:400)
  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
  at com.databricks.spark.redshift.RedshiftRelation.buildScan(RedshiftRelation.scala:168)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:326)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:325)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProjectRaw(DataSourceStrategy.scala:403)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProject(DataSourceStrategy.scala:321)
  at org.apache.spark.sql.execution.datasources.DataSourceStrategy.apply(DataSourceStrategy.scala:289)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
  at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
  at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
  at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
  at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
  at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
  at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
  at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
  at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:72)
  at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:68)
  at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:77)
  at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:77)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3360)
  at org.apache.spark.sql.Dataset.head(Dataset.scala:2545)
  at org.apache.spark.sql.Dataset.take(Dataset.scala:2759)
  at org.apache.spark.sql.Dataset.getRows(Dataset.scala:255)
  at org.apache.spark.sql.Dataset.showString(Dataset.scala:292)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:746)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:705)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:714)

I checked online forums and learned that Spark 2.4 has added a built-in Avro source, which is why the Databricks connector can no longer deserialize the data.

I tried two approaches (a rough sketch of both is shown after this list):

  1. Setting spark.sql.legacy.replaceDatabricksSparkAvro.enabled to true, as described in https://spark.apache.org/docs/latest/sql-data-sources-avro.html. The exception remained the same.

  2. Using a plain JDBC connection, as described in https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html. Here I get a connection timeout.
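
For reference, this is roughly what the two attempts look like (the driver class name is an assumption based on the RedshiftJDBC42 jar; url and queryFinal are the values defined above):

// Attempt 1: map com.databricks.spark.avro to the built-in Avro source via the legacy SQL conf
spark.conf.set("spark.sql.legacy.replaceDatabricksSparkAvro.enabled", "true")

// Attempt 2: read directly through the Redshift JDBC driver, bypassing the connector
// (driver class assumed from the RedshiftJDBC42 jar; this is the read that times out for us)
val jdbcDf = spark.read.format("jdbc")
  .option("url", url)
  .option("driver", "com.amazon.redshift.jdbc42.Driver")
  .option("query", queryFinal)
  .load()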

Does anyone know of a solution to this? It would be really helpful.

Upvotes: 2

Views: 2779

Answers (1)

Laura Álvarez

Reputation: 54

As stated in this issue of the Databricks spark-redshift connector, the library is no longer maintained as a separate project and therefore does not support Spark 2.4.x.

If you want to keep using Redshift with Spark 2.4.x, there is an alternative: the Udemy fork. With it, you add the Avro dependency ("org.apache.spark" %% "spark-avro", included in Spark from version 2.4.0) as "provided" in your dependencies file and pass the option --packages org.apache.spark:spark-avro_2.12:2.4.3 to the spark-submit command, as explained in the Avro documentation.
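
For example, a minimal sketch of this setup with sbt (the version, Scala suffix, and application jar name are illustrative; match them to your Spark build):

// build.sbt: spark-avro is marked "provided" because --packages supplies it at submit time
libraryDependencies += "org.apache.spark" %% "spark-avro" % "2.4.3" % "provided"

and then on submit:

spark-submit --packages org.apache.spark:spark-avro_2.12:2.4.3 --jars RedshiftJDBC42-1.2.10.1009.jar your-app.jar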

Upvotes: 2
