Reputation: 2977
Working with Spark configured with YARN (in client mode, though that is not very relevant to the question), I found that some of my Executors were failing. The Executor, which runs as a YARN container, has its individual log file at /var/log/hadoop-yarn/containers/containerID. Some of the (critical) events/logs generated by the container percolate up to the driver, but not all of them. I observed that when an Executor fails, its log file is cleared as soon as it dies. Is there any way to keep these logs from getting deleted for debug purposes?
Upvotes: 3
Views: 1617
Reputation: 7138
Since you have Spark on YARN, this should help you gather all of the logs for the application once it has finished:
yarn logs -applicationId <application ID>
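Note that yarn logs only returns output after the application completes, and it requires log aggregation to be enabled on the cluster. The property names below are standard YARN settings; the values are example values, not recommendations. A minimal yarn-site.xml sketch:
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <!-- keep aggregated logs for 7 days (example value) -->
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
<property>
  <!-- delay deletion of each container's local logs/dirs after it exits,
       here 10 minutes (example value); useful when aggregation is off -->
  <name>yarn.nodemanager.delete.debug-delay-sec</name>
  <value>600</value>
</property>
The last property, yarn.nodemanager.delete.debug-delay-sec, directly addresses the "log file is cleared as soon as it dies" behaviour from the question: the NodeManager keeps the container's local directory (including its logs under /var/log/hadoop-yarn/containers) around for the configured number of seconds, so you can inspect a failed Executor's logs before they are removed.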
Upvotes: 1