Reputation: 1
We recently updated our preproduction environment from Spark 2.3 to Spark 2.4.0.
Over time we have created a large number of tables in the Hive Metastore, partitioned by two fields, one of type String and the other of type BigInt.
We were reading these tables with Spark 2.3 without any problem, but after upgrading to Spark 2.4 we get the following log every time we run our software:
log_filterBIGINT.out:
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
hadoop-cmf-hive-HIVEMETASTORE-isblcsmsttc0001.scisb.isban.corp.log.out.1:
2020-01-10 09:36:05,781 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-5-thread-138]: MetaException(message:Filtering is supported only on partition keys of type string)
2020-01-10 11:19:19,208 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-5-thread-187]: MetaException(message:Filtering is supported only on partition keys of type string)
2020-01-10 11:19:54,780 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-5-thread-167]: MetaException(message:Filtering is supported only on partition keys of type string)
We know the best practice from Spark's point of view is to use the STRING type for partition columns, but we need a solution we can deploy easily, given the large number of tables already created with a BIGINT partition column.
As a first attempt we tried setting the spark.sql.hive.manageFilesourcePartitions parameter to false in the spark-submit, but after rerunning the software the error persisted.
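For reference, a minimal sketch of how we passed the parameter at submit time (the application class and jar names are placeholders for our actual job):

# Hypothetical launch command; class and jar names are placeholders
spark-submit \
  --conf spark.sql.hive.manageFilesourcePartitions=false \
  --class com.example.OurJob \
  our-job.jar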
Has anyone in the community experienced the same problem? What was the solution?
Upvotes: 0
Views: 2561
Reputation: 1054
The Spark SQL property spark.sql.hive.convertMetastoreOrc is disabled by default in Spark 2.3 and enabled by default in Spark 2.4. Enabling the property causes the Hive table to be converted to a data source table, and I believe this conversion is what caused the issue here.
Can we set spark.sql.hive.convertMetastoreOrc=false and run the query?
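A minimal sketch of how this could be passed at submit time (the class and jar names are placeholders; only the --conf line matters here):

# Hypothetical launch command; only the --conf flag is the suggested change
spark-submit \
  --conf spark.sql.hive.convertMetastoreOrc=false \
  --class com.example.OurJob \
  our-job.jar

The property should also be settable per session before the query runs, e.g. with SET spark.sql.hive.convertMetastoreOrc=false in Spark SQL.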
Upvotes: 1