Reputation: 33
I have a Spark cluster with YARN as the resource manager and 4 identical workers, each with 16 GB RAM and 4 CPU cores. These are the properties used while creating the cluster:
--properties yarn:yarn.nodemanager.resource.memory-mb=13544 \
--properties yarn:yarn.scheduler.maximum-allocation-mb=2048 \
--properties yarn:yarn.scheduler.capacity.maximum-am-resource-percent=0.95 \
--properties spark:spark.driver.memory=1024m \
--properties spark:spark.driver.cores=1 \
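For context, the `yarn:`/`spark:` property prefixes above are Google Cloud Dataproc syntax, so the cluster was presumably created with `gcloud dataproc clusters create`. A minimal sketch of that command under that assumption (the cluster name and region are placeholders, not values from my setup):

```shell
# Sketch only: cluster name and region are placeholder assumptions.
# Dataproc takes all prefixed properties in a single --properties flag,
# comma-separated.
gcloud dataproc clusters create my-cluster \
    --region=us-central1 \
    --num-workers=4 \
    --properties='yarn:yarn.nodemanager.resource.memory-mb=13544,yarn:yarn.scheduler.maximum-allocation-mb=2048,yarn:yarn.scheduler.capacity.maximum-am-resource-percent=0.95,spark:spark.driver.memory=1024m,spark:spark.driver.cores=1'
```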
Yet the YARN scheduler UI shows Configured Max Application Master Limit: 25.0.
When I submit additional applications, each job is accepted but never started, with this diagnostic:
Application is added to the scheduler and is not yet activated. Queue's AM resource limit exceeded.
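For reference, here is the back-of-envelope AM headroom I would expect from these settings. This is my own sketch, assuming the default queue holds 100% of cluster capacity and that `maximum-am-resource-percent` caps the combined memory of all application masters at that fraction of the queue's resources:

```python
# Sketch of the CapacityScheduler AM-limit arithmetic, using the values
# from the cluster properties above (queue share assumed to be 100%).
nodes = 4
node_memory_mb = 13544        # yarn.nodemanager.resource.memory-mb
am_resource_percent = 0.95    # yarn.scheduler.capacity.maximum-am-resource-percent

cluster_memory_mb = nodes * node_memory_mb          # 54176 MB total
am_limit_mb = int(cluster_memory_mb * am_resource_percent)
print(am_limit_mb)  # → 51467, expected memory available to all AMs combined
```

So with 0.95 configured I would expect roughly 51 GB of AM headroom, which makes the reported limit of 25.0 all the more confusing.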
I have looked through the documentation for both YARN and Spark, and I have tried these properties:
yarn.scheduler.capacity.maximum-applications
yarn.scheduler.capacity.root.maximum-applications
yarn.scheduler.capacity.root.maximum-capacity
yarn.scheduler.capacity.root.capacity
But the limit stays at 25.0. How can this be changed? Any pointers are appreciated.
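For comparison, on a stock Hadoop install (outside Dataproc's `--properties` mechanism) my understanding is that this knob lives in `capacity-scheduler.xml` and is reloaded with `yarn rmadmin -refreshQueues`; a minimal fragment mirroring the 0.95 value I set at cluster creation:

```xml
<!-- capacity-scheduler.xml: fraction of queue resources usable by
     application masters; reload with: yarn rmadmin -refreshQueues -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.95</value>
</property>
```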
Upvotes: 1
Views: 155