John Simon

Reputation: 826

Failed to create spark client: Hive on spark exception

I have changed my Hive execution engine to Spark. Now, when running any DML/DDL statement, I get the exception below.

hive> select count(*) from tablename;
Query ID = jibi_john_20160602153012_6ec1da36-dcb3-4f2f-a855-3b68be118b36
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
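For context, the engine switch itself was done with the standard session setting; a minimal sketch (the same property can also go in hive-site.xml):

set hive.execution.engine=spark;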

Upvotes: 1

Views: 3344

Answers (2)

Puneet Gupta

Reputation: 31

It may be due to a memory issue. Try setting the YARN container memory and maximum allocation to be greater than the Spark executor memory plus overhead:

yarn.scheduler.maximum-allocation-mb
yarn.nodemanager.resource.memory-mb
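For example, a minimal sketch with illustrative sizes (assumptions; pick values that fit your nodes): executors requesting 4 GB plus 1 GB of overhead need both YARN limits to allow at least a 5 GB container.

<!-- yarn-site.xml: total memory YARN may hand out per node, and the largest single container -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>

-- in the Hive session; spark.yarn.executor.memoryOverhead is given in MB
set spark.executor.memory=4g;
set spark.yarn.executor.memoryOverhead=1024;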

Upvotes: 0

Paul Back

Reputation: 1319

One possible cause is that you are hitting a timeout before YARN assigns an ApplicationMaster. You can extend this timeout by setting hive.spark.client.server.connect.timeout.

Its default value is 90000 ms (90 seconds).
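For example, a sketch that raises it to 5 minutes (the value is an illustrative assumption, in milliseconds); it can be set per session or in hive-site.xml:

-- default is 90000 ms; raise to 300000 ms to give YARN more time to start the ApplicationMaster
set hive.spark.client.server.connect.timeout=300000;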

Upvotes: 1
