Reputation: 2825
I have written a simple application and it runs fine, but when I submit it via spark-submit the session does not finish even after spark.stop() is called, and I have to kill the PID.
Below is the code snippet:
import java.util.concurrent.Executors

import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

import org.apache.spark.sql.{DataFrame, SparkSession}

object FaultApp {
  case class Person(name: String, age: Long)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .enableHiveSupport()
      .config("spark.scheduler.mode", "FAIR")
      .appName("parjobs")
      .getOrCreate()

    import spark.implicits._

    val pool = Executors.newFixedThreadPool(5)
    // create the implicit ExecutionContext based on our thread pool
    implicit val xc = ExecutionContext.fromExecutorService(pool)

    import Function._

    val caseClass = Seq(Person("X", 32),
                        Person("Y", 37),
                        Person("Z", 37),
                        Person("A", 6))

    val caseClassDS = caseClass.toDF()

    val taskA = write_first(caseClassDS)

    Await.result(Future.sequence(Seq(taskA)), Duration(1, MINUTES))

    spark.stop()
    println("After Spark Stop command")
  }
}

object Function {
  def write_first(ds: DataFrame)(implicit xc: ExecutionContext) = Future {
    Thread.sleep(10000)
    ds.write.format("orc").mode("overwrite")
      .option("compression", "zlib")
      .saveAsTable("save_1")
  }
}
I am submitting the job with the following command:
spark-submit --master yarn --deploy-mode client fault_application-assembly-1.0-SNAPSHOT.jar --executor-memory 1G --executor-cores 2 --driver-memory 1G
Below are the last few lines from the log:
18/04/18 15:15:20 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices (serviceOption=None, services=List(), started=false)
18/04/18 15:15:20 INFO YarnClientSchedulerBackend: Stopped
18/04/18 15:15:20 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/04/18 15:15:20 INFO MemoryStore: MemoryStore cleared
18/04/18 15:15:20 INFO BlockManager: BlockManager stopped
18/04/18 15:15:20 INFO BlockManagerMaster: BlockManagerMaster stopped
18/04/18 15:15:20 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/04/18 15:15:20 INFO SparkContext: Successfully stopped SparkContext
After Spark Stop command
Any help or advice will be greatly appreciated.
Upvotes: 1
Views: 4344
Reputation: 5213
That's because you're creating an ExecutionContext backed by a fixed thread pool. Its worker threads are non-daemon threads, so the JVM won't exit until that pool is also shut down.
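You can see the same effect without Spark; a minimal standalone sketch (a hypothetical example, not taken from your job):

import java.util.concurrent.Executors

object PoolKeepsJvmAlive {
  def main(args: Array[String]): Unit = {
    val pool = Executors.newFixedThreadPool(5)
    // once a task has run, the pool keeps its non-daemon core threads alive
    pool.execute(new Runnable { def run(): Unit = println("task done") })
    println("main is done")
    // the JVM keeps running here; uncommenting the next line lets it exit
    // pool.shutdown()
  }
}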
After spark.stop(), add:

xc.shutdown()
println("After shutdown.")
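In your main that would look something like this (a sketch keeping your existing names; only the last two lines are new):

Await.result(Future.sequence(Seq(taskA)), Duration(1, MINUTES))

spark.stop()
println("After Spark Stop command")

// shut down the fixed thread pool behind xc so its non-daemon
// threads no longer keep the JVM alive
xc.shutdown()
println("After shutdown.")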
Alternatively, instead of creating a new execution context for your futures, you could just use the global one:
implicit val executor = scala.concurrent.ExecutionContext.global
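A minimal sketch of that variant, assuming the rest of your main stays unchanged:

import scala.concurrent.ExecutionContext

// replaces the fixed thread pool; the global context uses daemon
// threads, so nothing extra has to be shut down at the end of main
implicit val executor: ExecutionContext = ExecutionContext.global

val taskA = write_first(caseClassDS)
Await.result(Future.sequence(Seq(taskA)), Duration(1, MINUTES))
spark.stop()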
Upvotes: 3