Reputation: 15534
I have the following simple code in IntelliJ IDEA on my Mac:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SparkGrep {
  def main(args: Array[String]) {
    if (args.length < 3) {
      System.err.println("Usage: SparkGrep <host> <input_file> <match_term>")
      System.exit(1)
    }
    val conf = new SparkConf().setAppName("SparkGrep").setMaster(args(0))
    val sc = new SparkContext(conf)
    val inputFile = sc.textFile(args(1), 2).cache()
    val matchTerm: String = args(2)
    val numMatches = inputFile.filter(line => line.contains(matchTerm)).count()
    println("%s lines in %s contain %s".format(numMatches, args(1), matchTerm))
    System.exit(0)
  }
}
In my run configuration, I have added the following program arguments:
local[*] src/SparkGrep.scala val
When I run this code, I get the following error:
Exception in thread "main" org.apache.spark.SparkException: Could not parse Master URL: 'local[*]'
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1304)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:199)
at spark.SparkTest.SparkGrep$.main(SparkGrep.scala:26)
at spark.SparkTest.SparkGrep.main(SparkGrep.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
What can I do to overcome this error?
Upvotes: 0
Views: 5784
Reputation: 21
You should try the following line:
val sc = new SparkContext(conf=conf)
Upvotes: 0
Reputation: 503
Let IntelliJ finish after every step, since pulls from Maven can sometimes be slow.

1. Preferences > Plugins > Scala (install the Scala plugin)
2. File > New > Project, select Scala on the left pane and SBT on the right pane
3. Open Module Settings > Libraries > + (module icon) > Maven > org.apache.spark:spark-core_2.11:1.6.1 > Enter
4. Enter a project name
5. Create a Scala file in src/main/scala, e.g. Test.scala (shown after the note below)
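As an alternative to adding the library through Module Settings, the same dependency could be declared in build.sbt. This is only a minimal sketch: the scalaVersion value is an assumption (any 2.11.x works), and the spark-core coordinate is the one from step 3 above.

// build.sbt (sketch) -- resolves to the same spark-core_2.11:1.6.1 artifact as step 3
scalaVersion := "2.11.8"  // assumed 2.11.x to match the _2.11 artifact

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1"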
Test.scala:

import org.apache.spark.{SparkContext, SparkConf}

object Test {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("DevDemo").setMaster("local")
    val sc = new SparkContext(conf)
    val inputFile = sc.textFile("/var/log/fsck_hfs.log").cache()
    // Keep only the lines that contain "ERROR", then count them
    val errAs = inputFile.filter(line => line.contains("ERROR"))
    println("Error count : %s".format(errAs.count()))
  }
}
Run Menu > Run

Result (log output snipped):
16/06/13 14:39:19 INFO DAGScheduler: ResultStage 0 (count at Test.scala:14) finished in 1.258 s
16/06/13 14:39:19 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/06/13 14:39:19 INFO DAGScheduler: Job 0 finished: count at Test.scala:14, took 1.829030 s
Error count : 18
Upvotes: 2