Reputation: 852
I run my Spark application on a YARN cluster. In my code I use the number of available cores of the queue to create partitions on my dataset:
Dataset ds = ...
ds.coalesce(config.getNumberOfCores());
My question: how can I get the number of available cores of the queue programmatically, rather than from configuration?
Upvotes: 16
Views: 25216
Reputation: 31
For all of those who aren't using YARN clusters: if you are doing this in Python/Databricks, here is a function I wrote that will help solve the problem. It gets both the number of worker nodes and the number of CPUs per worker, and returns the total CPU count across your worker distribution.
import pandas as pd  # used to parse the CPU count out of the node type string

def GetDistCPUCount():
    # Number of target worker nodes in the Databricks cluster
    nWorkers = int(spark.sparkContext.getConf().get('spark.databricks.clusterUsageTags.clusterTargetWorkers'))
    # Worker node type, e.g. 'Standard_DS3_v2'
    GetType = spark.sparkContext.getConf().get('spark.databricks.clusterUsageTags.clusterNodeType')
    # Split the node type name and extract the digits from its second segment
    GetSubString = pd.Series(GetType).str.split(pat='_', expand=True)
    GetNumber = GetSubString[1].str.extract(r'(\d+)')
    ParseOutString = GetNumber.iloc[0, 0]
    WorkerCPUs = int(ParseOutString)
    nCPUs = nWorkers * WorkerCPUs
    return nCPUs
Upvotes: 1
Reputation: 9026
You could run jobs on every machine and ask each for its number of cores, but that's not necessarily what's available to Spark (as pointed out by @tribbloid in a comment on another answer):
import spark.implicits._
import scala.collection.JavaConverters._
import sys.process._
val procs = (1 to 1000).toDF.map(_ => "hostname".!!.trim -> java.lang.Runtime.getRuntime.availableProcessors).collectAsList().asScala.toMap
val nCpus = procs.values.sum
Running it in the shell (on a tiny test cluster with two workers) gives:
scala> :paste
// Entering paste mode (ctrl-D to finish)
import spark.implicits._
import scala.collection.JavaConverters._
import sys.process._
val procs = (1 to 1000).toDF.map(_ => "hostname".!!.trim -> java.lang.Runtime.getRuntime.availableProcessors).collectAsList().asScala.toMap
val nCpus = procs.values.sum
// Exiting paste mode, now interpreting.
import spark.implicits._
import scala.collection.JavaConverters._
import sys.process._
procs: scala.collection.immutable.Map[String,Int] = Map(ip-172-31-76-201.ec2.internal -> 2, ip-172-31-74-242.ec2.internal -> 2)
nCpus: Int = 4
Add zeros to your range if you typically have lots of machines in your cluster. Even on my two-machine cluster, 10000 completes in a couple of seconds.
This is probably only useful if you want more information than sc.defaultParallelism() will give you (as in @SteveC's answer).
Upvotes: 1
Reputation: 1095
According to Databricks, if the driver and executors are of the same node type, this is the way to go:
java.lang.Runtime.getRuntime.availableProcessors * (sc.statusTracker.getExecutorInfos.length - 1)
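A minimal sketch of how this one-liner might be wrapped in application code (assuming a live SparkSession named spark; getExecutorInfos includes an entry for the driver, hence the minus one):
val sc = spark.sparkContext
// Sketch only: valid when driver and executor nodes have the same core count,
// since availableProcessors here is evaluated on the driver.
val executorCount = sc.statusTracker.getExecutorInfos.length - 1 // one entry is the driver
val totalCores = java.lang.Runtime.getRuntime.availableProcessors * executorCount
// e.g. ds.coalesce(totalCores)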
Upvotes: 1
Reputation: 19445
Found this while looking for the answer to pretty much the same question.
I found that:
Dataset ds = ...
ds.coalesce(sc.defaultParallelism());
does exactly what the OP was looking for.
For example, my 5 node x 8 core cluster returns 40 for the defaultParallelism.
Upvotes: 8
Reputation: 13528
There are ways to get both the number of executors and the number of cores in a cluster from Spark. Here is a bit of Scala utility code that I've used in the past. You should easily be able to adapt it to Java. There are two key ideas:
The number of workers is the number of executors minus one, or sc.getExecutorStorageStatus.length - 1.
The number of cores per worker can be obtained by executing java.lang.Runtime.getRuntime.availableProcessors on a worker.
The rest of the code is boilerplate for adding convenience methods to SparkContext using Scala implicits. I wrote the code for Spark 1.x years ago, which is why it is not using SparkSession.
One final point: it is often a good idea to coalesce to a multiple of your cores as this can improve performance in the case of skewed data. In practice, I use anywhere between 1.5x and 4x, depending on the size of data and whether the job is running on a shared cluster or not.
import org.apache.spark.SparkContext
import scala.language.implicitConversions

class RichSparkContext(val sc: SparkContext) {
  def executorCount: Int =
    sc.getExecutorStorageStatus.length - 1 // one is the driver

  def coresPerExecutor: Int =
    RichSparkContext.coresPerExecutor(sc)

  def coreCount: Int =
    executorCount * coresPerExecutor

  def coreCount(coresPerExecutor: Int): Int =
    executorCount * coresPerExecutor
}

object RichSparkContext {
  trait Enrichment {
    implicit def enrichMetadata(sc: SparkContext): RichSparkContext =
      new RichSparkContext(sc)
  }

  object implicits extends Enrichment

  private var _coresPerExecutor: Int = 0

  def coresPerExecutor(sc: SparkContext): Int =
    synchronized {
      // Probe a worker once and cache the result for subsequent calls
      if (_coresPerExecutor == 0)
        _coresPerExecutor = sc.range(0, 1).map(_ => java.lang.Runtime.getRuntime.availableProcessors).collect.head
      _coresPerExecutor
    }
}
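For completeness, here is a hedged usage sketch of the enrichment above (object and dataset names are illustrative; it assumes a Spark version where getExecutorStorageStatus still exists, and uses a 3x multiple per the 1.5x-4x suggestion earlier):
import org.apache.spark.sql.SparkSession
import RichSparkContext.implicits._ // enables sc.coreCount and friends

object CoalesceToCores {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("coalesce-to-cores").getOrCreate()
    val sc = spark.sparkContext

    val ds = spark.range(0, 10000000L) // stand-in for the OP's Dataset

    // Coalesce to a small multiple of the total core count (3x here) to
    // soften the impact of skewed partitions.
    val coalesced = ds.coalesce(3 * sc.coreCount)
    println(s"partitions = ${coalesced.rdd.getNumPartitions}")

    spark.stop()
  }
}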
Update
Recently, getExecutorStorageStatus has been removed. We have switched to using SparkEnv's blockManager.master.getStorageStatus.length - 1 (the minus one is for the driver again). The normal way to get to it, via env of SparkContext, is not accessible outside of the org.apache.spark package. Therefore, we use an encapsulation violation pattern:
package org.apache.spark

object EncapsulationViolator {
  def sparkEnv(sc: SparkContext): SparkEnv = sc.env
}
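A hedged sketch of how application code might call this accessor (the helper name is illustrative, not part of Spark):
import org.apache.spark.{EncapsulationViolator, SparkContext}

object ExecutorCounter {
  // Assumes the EncapsulationViolator object above has been compiled into
  // the org.apache.spark package of your application jar.
  def executorCount(sc: SparkContext): Int =
    EncapsulationViolator.sparkEnv(sc).blockManager.master.getStorageStatus.length - 1 // minus the driver
}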
Upvotes: 20