Lvcy

Reputation: 11

Scala code works in spark-shell but not with spark-submit

The following is the main Scala code:

import org.apache.spark.{SparkConf, SparkContext}

1. val conf = new SparkConf()
2. conf.setMaster("spark://master:7077")
3. conf.setAppName("Community Detective")
4. val sc = new SparkContext(conf)
5. val rdd = sc.textFile("hdfs://master:9000/lvcy/test/ungraph/test.txt")
6. val maprdd = rdd.map(line => { val p = line.split("\\s+"); (p(0), p(1)) }) union rdd.map(line => { val p = line.split("\\s+"); (p(1), p(0)) })
7. val reducerdd = maprdd.reduceByKey((a, b) => a + "\t" + b)
8. val reduceArray = reducerdd.collect()
9. val reducemap = reduceArray.toMap

Problem statement:

  1. Copying the code (lines 5-9) into spark-shell produces the correct result.
  2. If I put the code into Eclipse, build a jar package, and submit the job with "spark-submit", I get the error below ("Main.scala:21" corresponds to line 9 above, i.e. the call to toMap is what fails). Why?

    Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
    at net.lvcy.main.Main$.main(Main.scala:21)
    at net.lvcy.main.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    

Upvotes: 1

Views: 1493

Answers (2)

faissalb

Reputation: 1749

The prebuilt Spark distribution is compiled with Scala 2.10, so make sure you are running your Spark cluster with Scala 2.10.
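A quick way to confirm which Scala version the cluster binaries actually use (assuming you can open a spark-shell session against the same master) is to print it from the shell, since spark-shell runs on the Scala version Spark was built with:

    // run inside spark-shell on the cluster; prints something like "version 2.10.4"
    println(scala.util.Properties.versionString)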

Upvotes: 0

Till Rohrmann

Reputation: 13346

It looks like a Scala version mismatch. You should make sure that the Scala version used to generate your jar is the same as the Scala version of your Spark cluster binaries, e.g. 2.10.
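For example, if the cluster ships Spark binaries built against Scala 2.10, a minimal build.sbt along these lines keeps the jar on the same Scala version (the exact Scala and Spark point versions here are placeholders; match them to your installation):

    // build.sbt -- sketch: pin the project to the cluster's Scala major version
    name := "community-detective"

    scalaVersion := "2.10.5"  // must match the Scala version of the Spark binaries

    // "provided": the cluster supplies the Spark runtime at execution time;
    // %% appends the Scala binary version suffix (_2.10) to the artifact name
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.4.1" % "provided"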

Upvotes: 1
