wymeka

Reputation: 97

Load file from Linux FS with spark submit

I'm having a hard time figuring out how to load a JSON file from the Linux file system in a Spark environment. I am using Spark 1.6, by the way.

The file is located at /home/wymeka/fields.json and I am trying this command line:

spark-submit --master yarn transform.jar --schema-file "file:///home/wymeka/fields.json" --cache

The line in the Main class in charge of loading this file is as follows:

val df_schema = sqlContext.read.json(pathToSchemaFile) 

All this leads me to the following exception:

Caused by: java.io.FileNotFoundException: File file:/home/wymeka/fields.json does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:542)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:755)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:532)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:425)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:778)
    at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
    at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
    at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Or, when I try this command line:

spark-submit --master yarn transform.jar --schema-file "file:\/\/\/home\/imachraoui\/fields.json" --cache

I get another exception :

 java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:%5C/%5C/%5C/home%5C/wymeka%5C/fields.json
    at org.apache.hadoop.fs.Path.initialize(Path.java:206)
    at org.apache.hadoop.fs.Path.<init>(Path.java:172)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$11.apply(ResolvedDataSource.scala:170)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$11.apply(ResolvedDataSource.scala:169)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
    at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:108)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:169)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:244)
    at com.nexys.spark.transform.Main$.main(Main.scala:80)
    at com.nexys.spark.transform.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:%5C/%5C/%5C/home%5C/wymeka%5C/fields.json
    at java.net.URI.checkPath(URI.java:1804)
    at java.net.URI.<init>(URI.java:752)
    at org.apache.hadoop.fs.Path.initialize(Path.java:203)
    ... 24 more

Any help would be very welcome.


Edited

Afterwards, I tried this command line:

spark-submit --files /home/wymeka/fields.json --master yarn transform.jar --schema-file "fields.json" --cache

and changed my Spark code as below:

val df_schema = sqlContext.read.json(SparkFiles.getRootDirectory()+"/"+pathToSchemaFile)

But still nothing!

Upvotes: 0

Views: 965

Answers (2)

Sayat Satybald

Reputation: 6580

You are submitting the application to a YARN cluster (the --master yarn parameter). Spark therefore expects the file you specified to be available at the local path /home/wymeka/fields.json on the cluster nodes.

To run the program locally, you should change the spark-submit master parameter:

--master local[*] 

Or specify a proper HDFS location if you want to deploy to the YARN cluster.
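For the HDFS route, a minimal sketch could look like this (the target directory /user/wymeka is an assumption; adjust it to your cluster layout):

hdfs dfs -put /home/wymeka/fields.json /user/wymeka/fields.json
spark-submit --master yarn transform.jar --schema-file "hdfs:///user/wymeka/fields.json" --cache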

Launching Applications with spark-submit

Upvotes: 0

SanthoshPrasad

Reputation: 1175

The file should be available on all worker nodes at the same path; otherwise, it should be an HDFS path.
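If placing the file on every node is not an option, another workaround (a sketch, assuming client deploy mode, so the driver runs on the machine that actually holds the file) is to read the file on the driver with plain JVM I/O and let Spark parse the resulting string; in Spark 1.6, sqlContext.read.json also accepts an RDD[String]:

import scala.io.Source

// Read the small schema file on the driver only (works in client deploy
// mode, where the driver runs on the machine holding the file).
val schemaJson = Source.fromFile("/home/wymeka/fields.json").mkString

// DataFrameReader.json accepts an RDD[String] in Spark 1.6, so no
// distributed filesystem is needed for a small schema file.
val df_schema = sqlContext.read.json(sc.parallelize(Seq(schemaJson)))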

Upvotes: 1
