Shubham Sahay

Reputation: 88

Spark-Scala writing the output in a textfile

I am running a word count program in Spark and trying to store the result in a text file.

I have a Scala script, SparkWordCount.scala, that counts words. I am trying to execute it from the Spark console as shown below.

scala> :load /opt/spark-2.0.2-bin-hadoop2.7/bin/SparkWordCount.scala
Loading /opt/spark-2.0.2-bin-hadoop2.7/bin/SparkWordCount.scala...
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark._
defined object SparkWordCount

scala>

After the program is executed I get the message "defined object SparkWordCount", but I am not able to see the output in the text file.

My word count program is below.

import org.apache.spark.SparkContext 
import org.apache.spark.SparkContext._ 
import org.apache.spark._  

object SparkWordCount { 
   def main(args: Array[String]) { 

      // Create a local SparkContext, pointing at the Spark home and its jars directory
      val sc = new SparkContext("local", "Word Count", "/opt/spark-2.0.2-bin-hadoop2.7", Seq("/opt/spark-2.0.2-bin-hadoop2.7/jars"), Map())

      // Read the input file from the working directory
      val input = sc.textFile("demo.txt") 

      // Split each line into words, pair each word with 1, and sum the counts per word
      val count = input.flatMap(line ⇒ line.split(" ")).map(word ⇒ (word, 1)).reduceByKey(_ + _) 

      // saveAsTextFile creates a directory named "outfile" containing part files
      count.saveAsTextFile("outfile") 
   } 
}

Can anyone please suggest what I am missing? Thanks.

Upvotes: 0

Views: 940

Answers (1)

vindev

Reputation: 2280

Once the object is defined, you can call its method to execute your code; spark-shell won't execute the main method automatically. In your case, you can use SparkWordCount.main(Array()) to execute your word count program.
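For example, from the same shell session (a minimal sketch; note that spark-shell already creates a SparkContext named sc, so if constructing a second context inside main fails, stop the built-in one first):

scala> sc.stop()                       // stop the shell's built-in context so main can create its own
scala> SparkWordCount.main(Array())    // run the program; Array() stands in for empty command-line args

Also note that saveAsTextFile writes a directory rather than a single file: after the job finishes, the counts appear split across part files such as outfile/part-00000, alongside an _SUCCESS marker.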

Upvotes: 1
