Nir

Reputation: 1724

Exception while reading file from ftp using SPARK

I got the below error while trying to read data from FTP using Spark.

WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.io.IOException: Seek not supported
    at org.apache.hadoop.fs.ftp.FTPInputStream.seek(FTPInputStream.java:62)
    at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:62)
    at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:127)
    at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
    at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:245)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

It looks like the FTP server doesn't support seek, while Spark by default tries to split the file into smaller chunks, using seek internally.

How can I read the FTP file without this issue?

Upvotes: 0

Views: 852

Answers (1)

Himanshu Parmar

Reputation: 437

The easiest way is to read the file as a whole instead of relying on seek.

The Java code below does this (note the `user:pwd@host` form of the FTP URI; the original had the `@` separator missing):

 String dataSource = "ftp://user:pwd@host/path/input.txt";
 sparkContext.wholeTextFiles(dataSource).values().saveAsTextFile("/Users/parmarh/git/spark-rdd-dataframe-dataset/output/ftp/");

The drawback is that this is very slow if the file is too big, because each file is read into memory as a single record.
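Since `wholeTextFiles` returns each file as one big (path, content) pair, you can still get back one record per line by splitting the content yourself in a `flatMap`. A minimal sketch of that splitting step (the helper name `toLines` is hypothetical, not a Spark API):

```java
import java.util.Arrays;
import java.util.List;

public class SplitLines {
    // Hypothetical helper: turns one whole-file string (as returned by
    // wholeTextFiles) back into individual lines, handling \n and \r\n.
    static List<String> toLines(String fileContent) {
        return Arrays.asList(fileContent.split("\\r?\\n"));
    }

    public static void main(String[] args) {
        System.out.println(toLines("line1\nline2\r\nline3")); // prints [line1, line2, line3]
    }
}
```

In Spark this could be applied as something like `sparkContext.wholeTextFiles(dataSource).values().flatMap(c -> SplitLines.toLines(c).iterator())`, which avoids seek entirely while still producing a line-based RDD.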

Upvotes: 1
