Omid Ebrahimi

Reputation: 1180

TaskSchedulerImpl: Initial job has not accepted any resources. (Error in Spark)

I'm trying to run SparkPi example on my standalone mode cluster.

package org.apache.spark.examples
import scala.math.random
import org.apache.spark._

/** Computes an approximation to pi */
object SparkPi {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("SparkPi")
      .setMaster("spark://192.168.17.129:7077")
      .set("spark.driver.allowMultipleContexts", "true")
    val spark = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = spark.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x*x + y*y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}

Note: I made a little change in this line:

val conf = new SparkConf().setAppName("SparkPi")
  .setMaster("spark://192.168.17.129:7077")
  .set("spark.driver.allowMultipleContexts", "true")

Problem: I'm using spark-shell (the Scala interface) to run this code. When I run it, I receive this error repeatedly:

15/02/09 06:39:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

Note: I can see my workers in my master's web UI, and I can also see a new job in the Running Applications section. But the application never finishes, and I keep seeing the error above.

What is the problem?

Thanks

Upvotes: 2

Views: 2022

Answers (1)

pzecevic

Reputation: 2857

If you want to run this from the Spark shell, start the shell with the argument --master spark://192.168.17.129:7077 and enter the following code:

import scala.math.random
import org.apache.spark._
val slices = 10
val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
val count = sc.parallelize(1 until n, slices).map { i =>
    val x = random * 2 - 1
    val y = random * 2 - 1
    if (x*x + y*y < 1) 1 else 0
}.reduce(_ + _)
println("Pi is roughly " + 4.0 * count / n)

Otherwise, compile the code into a jar and run it with spark-submit. Remove setMaster from the code and pass the master URL as the --master argument to the spark-submit script instead. Also remove the spark.driver.allowMultipleContexts setting from the code.

You only need one SparkContext.
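
For reference, here is a minimal sketch of what the spark-submit variant could look like, based on the code in the question. The jar name (sparkpi.jar) and the slices argument in the submit command below are just placeholders, not something from your setup:

package org.apache.spark.examples

import scala.math.random
import org.apache.spark._

/** Computes an approximation to pi; master URL comes from spark-submit, not the code */
object SparkPi {
  def main(args: Array[String]) {
    // No setMaster and no allowMultipleContexts here
    val conf = new SparkConf().setAppName("SparkPi")
    val spark = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = spark.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x*x + y*y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}

Then submit it with something like (jar path is a placeholder):

spark-submit --class org.apache.spark.examples.SparkPi --master spark://192.168.17.129:7077 sparkpi.jar 10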

Upvotes: 1
