Eugene Goldberg

Reputation: 15534

How to overcome Spark "cannot parse master URL" error?

I have the following simple code in IntelliJ IDEA on my Mac:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf


object SparkGrep {
  def main(args: Array[String]) {
    if (args.length < 3) {
      System.err.println("Usage: SparkGrep <host> <input_file> <match_term>")
      System.exit(1)
    }
    val conf = new SparkConf().setAppName("SparkGrep").setMaster(args(0))
    val sc = new SparkContext(conf)
    val inputFile = sc.textFile(args(1), 2).cache()
    val matchTerm: String = args(2)
    val numMatches = inputFile.filter(line => line.contains(matchTerm)).count()
    println("%s lines in %s contain %s".format(numMatches, args(1), matchTerm))
    System.exit(0)
  }
}

In my run configuration, I have added the following program arguments:

local[*] src/SparkGrep.scala val

When I run this code, I get the following error:

Exception in thread "main" org.apache.spark.SparkException: Could not parse Master URL: 'local[*]'
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1304)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:199)
    at spark.SparkTest.SparkGrep$.main(SparkGrep.scala:26)
    at spark.SparkTest.SparkGrep.main(SparkGrep.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

What can I do to overcome this error?

Upvotes: 0

Views: 5784

Answers (2)

Nilesh Varshney

Reputation: 21

You should try the following line:

val sc = new SparkContext(config = conf)
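
For context, a minimal sketch of that suggestion in a complete program (the object name and the local[*] master are illustrative; the SparkContext constructor parameter is named config, hence the named-argument form config = conf):

import org.apache.spark.{SparkConf, SparkContext}

object ConfExample {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("SparkGrep").setMaster("local[*]")
    // Named-argument form of the constructor call suggested above
    val sc = new SparkContext(config = conf)
    println(sc.master)
    sc.stop()
  }
}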

Upvotes: 0

prateek05

Reputation: 503

IntelliJ IDEA + Mac + Spark

After every step, give IntelliJ time to finish indexing, since pulls from Maven can sometimes be slow.

IntelliJ Setup

  1. Install the Scala plugin from Preferences > Plugins > Scala
  2. File > New > Project, select Scala on the left pane, select SBT on the right pane
  3. Right-click on the project's name > Open Module Settings > Libraries
  4. Press the + icon > Maven > org.apache.spark:spark-core_2.11:1.6.1 > Enter (or declare the dependency in build.sbt; see the sketch after this list)
  5. Add the library to the project module
  6. The Spark library should now appear under the External Libraries section
  7. Create a new Scala file in src/main/scala, e.g. Test.scala
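
If you prefer declaring the dependency in the build file rather than through Module Settings, here is a minimal build.sbt sketch (the name, version, and scalaVersion values are placeholders; any Scala 2.11.x matches the spark-core_2.11 artifact from step 4):

name := "SparkTest"

version := "0.1"

scalaVersion := "2.11.8"

// Same artifact as step 4: org.apache.spark:spark-core_2.11:1.6.1
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1"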

Test.scala

import org.apache.spark.{SparkContext, SparkConf}

object Test {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("DevDemo").setMaster("local")
    val sc = new SparkContext(conf)
    // Read the log file into an RDD of lines and cache it
    val inputFile = sc.textFile("/var/log/fsck_hfs.log").cache()
    // Keep only the lines that contain "ERROR"
    val errAs = inputFile.filter(line => line.contains("ERROR"))
    println("Error count : %s".format(errAs.count()))
  }
}

IntelliJ

Run Menu > Run

Result (snipped):

16/06/13 14:39:19 INFO DAGScheduler: ResultStage 0 (count at Test.scala:14) finished in 1.258 s
16/06/13 14:39:19 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/06/13 14:39:19 INFO DAGScheduler: Job 0 finished: count at Test.scala:14, took 1.829030 s
Error count : 18
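
A possible variant, not taken from the run above: since new SparkConf() also loads spark.* Java system properties, you can omit setMaster from the code and supply the master externally, e.g. by adding -Dspark.master=local[*] to the VM options of the IntelliJ run configuration. A minimal sketch (the object name is illustrative):

import org.apache.spark.{SparkContext, SparkConf}

object TestExternalMaster {
  def main(args: Array[String]) {
    // No setMaster here; the master is expected from -Dspark.master=...
    val conf = new SparkConf().setAppName("DevDemo")
    val sc = new SparkContext(conf)
    println("Running with master: %s".format(sc.master))
    sc.stop()
  }
}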

Upvotes: 2
