Vinay K L

Reputation: 45

ERROR : FAILED: Error in acquiring locks: Error communicating with the metastore org.apache.hadoop.hive.ql.lockmgr.LockException

Getting the "Error in acquiring locks" error when trying to run count(*) on partitioned tables. The table has 365 partitions; when filtered to <= 350 partitions, the queries work fine. When more partitions are included in the query, it fails with the error above.

Working on Hive-managed ACID tables, with the following default values

Tried increasing/decreasing these values within a beeline session.

Using the HDI-4.0 Interactive Query (LLAP) cluster; the metastore is backed by the default SQL Server provided with it.

Upvotes: 0

Views: 9132

Answers (3)

Umer

Reputation: 25

We faced the same issue in HDInsight and solved it by upgrading the metastore. The default metastore has only 5 DTUs, which is not recommended for production environments. We migrated to a custom metastore, spun up an Azure SQL Server instance (P2, 250 DTUs or above), and set the following properties:

hive.direct.sql.max.elements.values.clause=200
hive.direct.sql.max.elements.in.clause=200

The values above are set because SQL Server cannot process more than 2100 parameters in a single request. You hit this issue once you go above roughly 348 partitions, because each partition creates 8 parameters for the metastore (8 x 348 exceeds the 2100 limit).
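As a sanity check on that arithmetic (taking the 8-parameters-per-partition figure above as given), the theoretical ceiling works out as follows; note it comes out somewhat lower than the ~348 quoted above, so not every partition necessarily consumes the full 8 parameters:

```python
# SQL Server caps a single request at 2100 parameters.
SQL_SERVER_PARAM_LIMIT = 2100

# Per the answer above, acquiring a lock uses up to 8
# metastore parameters per partition (assumed figure).
PARAMS_PER_PARTITION = 8

# Maximum partitions one lock request can cover at worst case.
max_partitions = SQL_SERVER_PARAM_LIMIT // PARAMS_PER_PARTITION
print(max_partitions)  # 262
```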

Upvotes: 0

Jerry Yang

Reputation: 31

The problem is NOT due to the service tier of the Hive metastore database. Based on the symptom, it is most probably due to too many partitions in one query. I have met the same issue several times. In hivemetastore.log, you should be able to see an error like this:

metastore.RetryingHMSHandler: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:254)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1608)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:578)

This is because, in the Hive metastore, each partition involved in the Hive query requires at most 8 parameters to acquire a lock.

Some possible workarounds:

  • Decompose the query into multiple sub-queries that each read from fewer partitions.

  • Reduce the number of partitions by setting different partition keys.

  • Remove partitioning if partition keys don't have any filters.
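The first workaround can be scripted. Here is a minimal sketch of generating the sub-queries; the table name `my_table`, the partition key `dt`, and the chunk size of 200 are illustrative assumptions, not values from the question:

```python
def chunked(items, size):
    """Yield successive slices of `items` of length <= size."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Hypothetical daily partition values (12 months x 28 days = 336).
partitions = [f"2019-{m:02d}-{d:02d}" for m in range(1, 13) for d in range(1, 29)]

# Keep each sub-query comfortably under the partition ceiling.
queries = [
    "SELECT COUNT(*) FROM my_table WHERE dt IN ({})".format(
        ", ".join(f"'{p}'" for p in chunk)
    )
    for chunk in chunked(partitions, 200)
]
print(len(queries))  # 2 sub-queries for 336 partitions
```

Summing the per-chunk counts client-side then gives the total count without any single query locking too many partitions.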

The following parameters manage the batch size for the INSERT queries generated by direct SQL; their default value is 1000. Set both of them to 100 (as a good starting point) in the Custom hive-site section of Hive configs via Ambari, and restart ALL Hive-related components (including the Hive metastore).

hive.direct.sql.max.elements.values.clause=100
hive.direct.sql.max.elements.in.clause=100
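Conceptually, lowering these caps makes the metastore split one large IN (...) list across several smaller statements instead of binding everything into a single request. A rough sketch of that batching behavior (the cap value mirrors the setting above; the rest is illustrative):

```python
MAX_ELEMENTS_IN_CLAUSE = 100  # mirrors hive.direct.sql.max.elements.in.clause

def batch_in_clauses(values, cap=MAX_ELEMENTS_IN_CLAUSE):
    """Split a parameter list into IN-clause batches of at most `cap`
    values, so no single statement nears SQL Server's 2100-parameter cap."""
    return [values[i:i + cap] for i in range(0, len(values), cap)]

partition_ids = list(range(365))          # one batch per <=100 partitions
batches = batch_in_clauses(partition_ids)
print(len(batches), max(len(b) for b in batches))  # 4 100
```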

Upvotes: 3

Bathiya Priyadarshana

Reputation: 1405

We also faced the same error in HDInsight, and after making many configuration changes similar to what you have done, the only thing that worked was scaling up our Hive metastore SQL DB server.

We had to scale it all the way to a P2 tier with 250 DTUs for our workloads to run without these lock exceptions. As you may know, a higher tier and DTU count improve the SQL server's IOPS and response time, so we suspected that metastore performance was the root cause of these lock exceptions as workloads increased.

The following link provides information about DTU-based performance variation of SQL servers in Azure.

https://learn.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-dtu

Additionally, as far as I know, the default Hive metastore that gets provisioned when you opt not to provide an external DB at cluster creation is just an S1-tier DB, which is not suitable for any high-capacity workload. As a best practice, always provision your metastore external to the cluster and attach it at cluster provisioning time. This gives you the flexibility to connect the same metastore to multiple clusters (so your Hive-layer schema can be shared across clusters, e.g. Hadoop for ETL and Spark for processing / machine learning), and full control to scale your metastore up or down as needed at any time.

The only way to scale the default metastore is by engaging the Microsoft support.

Upvotes: 1
