Reputation: 207
I am trying to use the Spark engine for my Hive query.
It is an old query, and I don't want to convert the whole thing to a Spark job.
But when I run the query, it fails with the following error:
Status: Failed
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
The only thing I have changed is the execution engine:
set hive.execution.engine=spark;
The same change works for other, similar queries, so I don't think it is a configuration issue. Or could it be one that I'm not aware of?
Has anybody faced this issue before?
Upvotes: 1
Views: 3726
Reputation: 352
Execute the command below in the Hive client over a HiveServer2 JDBC connection:
set hive.auto.convert.join=false;
It worked for me.
The detailed reason is explained here: https://www.cnblogs.com/CYan521/p/16716361.html
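If you prefer not to change the session interactively, the same setting can also be passed on the beeline command line. A minimal sketch, assuming a HiveServer2 at `host:10000` and a query file `query.hql` (both are placeholders):

```shell
# Run the query with map-join auto-conversion disabled for this run only
# (the JDBC URL and query.hql are placeholders for your environment)
beeline -u jdbc:hive2://host:10000 \
  --hiveconf hive.execution.engine=spark \
  --hiveconf hive.auto.convert.join=false \
  -f query.hql
```

Passing the property via `--hiveconf` keeps the change scoped to this one invocation instead of altering hive-site.xml for every query.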
Upvotes: 0
Reputation: 320
Use beeline's verbose mode to run the query. Check the query exception logs, the HiveServer2 logs, the Spark logs, and the Spark web UI worker logs (the last often has the exact stack trace). Also try running Spark in local mode.
Which versions of Hive, Spark, and Hadoop are you using?
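As a sketch of the steps above (the JDBC URL is a placeholder), verbose mode and Spark local mode can be enabled like this:

```shell
# Start beeline with verbose output so errors and stack traces are printed
# (replace the JDBC URL with your HiveServer2 endpoint)
beeline --verbose=true -u jdbc:hive2://host:10000

# Then, inside the beeline session, run the query with Spark in local mode
# to rule out cluster/resource problems:
#   set hive.execution.engine=spark;
#   set spark.master=local;
#   -- re-run the failing query here
```

If the query succeeds with `spark.master=local`, the problem is likely in the cluster-side Spark configuration or resources rather than the query itself.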
Upvotes: 0
Reputation: 128
Check the job's logs to see the true error. Return codes 1, 2, and 3 are all generic errors in both MR and Spark.
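For example, if the job ran on YARN, the underlying stack trace can usually be pulled from the aggregated container logs. A sketch, where the application ID is a placeholder you copy from the Hive console output:

```shell
# Fetch the aggregated container logs for the failed application and
# show some context around any exception (the ID is a placeholder)
yarn logs -applicationId <application_id> | grep -B 5 -A 20 "Exception"
```

The real cause (out of memory, a missing class, a serialization error, etc.) is almost always in these logs rather than in the generic `return code 3` message Hive prints.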
Upvotes: 1