KOUSIK MANDAL

Reputation: 2052

Cannot start a job from Java code in Spark; initial job has not accepted any resources

Hello, my Spark configuration in Java is:

SparkSession ss = SparkSession.builder()
    .config("spark.driver.host", "192.168.0.103")
    .config("spark.driver.port", "4040")
    .config("spark.dynamicAllocation.enabled", "false")
    .config("spark.cores.max","1")
    .config("spark.executor.memory","471859200")
    .config("spark.executor.cores","1")
    //.master("local[*]")
    .master("spark://kousik-pc:7077")
    .appName("abc")
    .getOrCreate();

Now, when I submit any job from inside the code (not by submitting a jar), I get the warning:

TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

The Spark UI looks like this: [screenshot of the Spark master web UI]

The worker shown in the screenshot was started with the command:

~/spark/sbin/start-slave.sh

All four jobs that are in the WAITING state were submitted from the Java code. I have tried all the solutions I could find online. Any ideas, please?

Upvotes: 0

Views: 1068

Answers (1)

Prasad Khode

Reputation: 6739

As per my understanding, you want to run a Spark job using only one executor core, so you don't have to specify spark.executor.cores.

spark.cores.max should take care of assigning only one core to each job, since its value is 1.
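
For example, a minimal sketch of the question's builder without spark.executor.cores could look like this (the host, port, memory value and master URL are simply carried over from the question, not recommendations):

SparkSession ss = SparkSession.builder()
    .config("spark.driver.host", "192.168.0.103")
    .config("spark.driver.port", "4040")
    .config("spark.dynamicAllocation.enabled", "false")
    .config("spark.cores.max", "1")               // caps the total cores this application may use
    .config("spark.executor.memory", "471859200") // memory value kept as in the question
    .master("spark://kousik-pc:7077")
    .appName("abc")
    .getOrCreate();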

It's always good practice to provide the configuration details, such as the master and the executor memory/cores, in the spark-submit command, as shown below:

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://xxx.xxx.xxx.xxx:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

If you want to explicitly limit the total number of cores allotted to each job, use --total-executor-cores in your spark-submit command.
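
When the job is launched from code rather than through spark-submit, as in the question, the in-code counterpart of --total-executor-cores on a standalone cluster is spark.cores.max. A minimal sketch (assuming the usual SparkConf/SparkSession imports; the master URL and app name are taken from the question):

SparkConf conf = new SparkConf()
    .setMaster("spark://kousik-pc:7077") // standalone master from the question
    .setAppName("abc")
    .set("spark.cores.max", "1");        // same effect as --total-executor-cores 1

SparkSession ss = SparkSession.builder().config(conf).getOrCreate();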

Check the Spark documentation on submitting applications for more details.

Upvotes: 1
