Reputation: 3071
Is there a way to pass hbase.rpc.timeout to a Spark job that is called through a shell script? I know I can set the hbase.rpc.timeout value when creating the HBaseConfiguration in the Spark job itself, but I want to pass the value from the shell.
Something like:
${SPARK_SUBMIT} \
--class mySpark \
--num-executors ${NUM_EXECUTORS} \
--master yarn-cluster \
--deploy-mode cluster \
--hbase.rpc.timeout 600000 \
${SPARK_JAR} "${START_TIME}" "${END_TIME}" "${OUTPUT_PATH}" 2>&1 | tee -a ${logPath}
Upvotes: 1
Views: 236
Reputation: 71
There are two ways to do this.
The first is to treat hbase.rpc.timeout 600000 as an application argument: append it to the arguments you already pass to ${SPARK_JAR} and parse it inside your job, setting it on the HBaseConfiguration there.
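A minimal sketch of that argument-based approach, assuming a Scala job whose main class is mySpark and that the timeout is appended as a fourth argument after START_TIME, END_TIME and OUTPUT_PATH (the argument position and fallback value are illustrative):
import org.apache.hadoop.hbase.HBaseConfiguration

object mySpark {
  def main(args: Array[String]): Unit = {
    val startTime  = args(0)
    val endTime    = args(1)
    val outputPath = args(2)
    // Extra argument appended by the shell script; fall back to 60000 ms if it is absent
    val hbaseRpcTimeout = if (args.length > 3) args(3) else "60000"

    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.rpc.timeout", hbaseRpcTimeout)
    // ... build the SparkContext and the HBase scan/connection from hbaseConf as before ...
  }
}
The shell side then becomes ${SPARK_JAR} "${START_TIME}" "${END_TIME}" "${OUTPUT_PATH}" "600000".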
The second is what you already suggested: pass it as a configuration option on the command line, e.g. --conf hbase.rpc.timeout=600000, and then read it inside the job with sparkContext.getConf().get("hbase.rpc.timeout").
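A minimal sketch of the --conf variant. One caveat: in my experience spark-submit warns "Ignoring non-spark config property" and drops --conf keys that do not start with spark., so this sketch uses the key spark.hbase.rpc.timeout and copies it onto the HBase configuration inside the job; if your Spark version accepts the bare key, use that instead:
// Submit side (assumed flag): --conf spark.hbase.rpc.timeout=600000
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.spark.{SparkConf, SparkContext}

object mySpark {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf())
    // Read the value back from the Spark configuration; the second argument is a default
    val hbaseRpcTimeout = sc.getConf.get("spark.hbase.rpc.timeout", "60000")

    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.rpc.timeout", hbaseRpcTimeout)
    // ... use hbaseConf for the HBase scan/connection as before ...
  }
}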
Upvotes: 1