
Reputation: 125

Output file is getting generated on the slave machine in Apache Spark

I am facing an issue while running a Spark Java program that reads a file, does some manipulation, and then generates an output file at a given path. Everything works fine when the master and slaves are on the same machine, i.e. in standalone-cluster mode. But the problem started when I deployed the same program on a multi-machine, multi-node cluster setup: the master is running at x.x.x.102 and the slave is running at x.x.x.104. Master and slave have exchanged SSH keys and are reachable from each other.

Initially the slave was not able to read the input file; I came to know that I need to call sc.addFile() before sc.textFile(), and that solved the issue. But now I see the output is being generated on the slave machine, in a _temporary folder under the output path, i.e. /tmp/emi/_temporary/0/task-xxxx/part-00000. In local cluster mode it works fine and generates the output file in /tmp/emi/part-00000.
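Concretely, the read side of my job now looks roughly like this (the path here is just a placeholder, not my real input):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CurrentJob {
    public static void main(String[] args) {
        JavaSparkContext sc =
            new JavaSparkContext(new SparkConf().setAppName("CurrentJob"));

        // Ship the input file to every node, then read it.
        sc.addFile("file:///tmp/input.txt");   // placeholder path
        JavaRDD<String> lines = sc.textFile("file:///tmp/input.txt");

        System.out.println(lines.count());
        sc.stop();
    }
}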

I came to know that I need to use SparkFiles.get(), but I am not able to understand how and where to use this method.

Till now I have been using:

DataFrame dataObj = ...
dataObj.javaRDD().coalesce(1).saveAsTextFile("file:/tmp/emi");

Can anyone please let me know how to call SparkFiles.get()?

In short, how can I tell the slave to create the output file on the machine where the driver is running?

Please help.

Thanks a lot in advance.

Upvotes: 3

Views: 1250

Answers (1)

zero323

Reputation: 330303

There is nothing unexpected here. Each worker writes its own part of the data separately. Using the file scheme only means that the data is written to a file in a file system that is local from the worker's perspective.

Regarding SparkFiles, it is not applicable in this particular case. SparkFiles can be used to distribute common files to the worker machines, not to deal with the results.
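For illustration, the usual pattern is to ship a side file with addFile and resolve its node-local copy inside a task with SparkFiles.get (a minimal sketch; lookup.txt is just an example file name, not anything from the question):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkFiles;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkFilesSketch {
    public static void main(String[] args) {
        JavaSparkContext sc =
            new JavaSparkContext(new SparkConf().setAppName("SparkFilesSketch"));

        // Distribute a common file to every node in the cluster.
        sc.addFile("file:///tmp/lookup.txt");   // example side file

        // Inside a task, SparkFiles.get resolves the node-local copy,
        // which can then be opened with plain Java IO.
        List<Integer> sizes = sc.parallelize(Arrays.asList(1, 2, 3))
            .map(x -> Files.readAllLines(
                    Paths.get(SparkFiles.get("lookup.txt")),
                    StandardCharsets.UTF_8).size())
            .collect();

        System.out.println(sizes);
        sc.stop();
    }
}

Note that the file is addressed only by its name on the worker side; the directory it lands in differs from node to node.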

If for some reason you want to perform writes on the machine used to run the driver code, you'll have to fetch the data to the driver first (either with collect, which requires enough memory to fit all the data, or with toLocalIterator, which collects one partition at a time and requires multiple jobs) and then use standard tools to write the results to the local file system. In general though, writing to the driver is not a good practice and most of the time it is simply useless.
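A driver-side write along those lines could look like this (a sketch, not drop-in code; writeOnDriver is a made-up helper and the RDD is assumed to contain strings):

import java.io.BufferedWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;

import org.apache.spark.api.java.JavaRDD;

public class DriverSideWrite {

    // Pulls the rows to the driver one partition at a time and writes
    // them to a single file on the driver's local file system.
    public static void writeOnDriver(JavaRDD<String> rdd, String path) throws Exception {
        try (BufferedWriter writer =
                Files.newBufferedWriter(Paths.get(path), StandardCharsets.UTF_8)) {
            Iterator<String> rows = rdd.toLocalIterator();
            while (rows.hasNext()) {
                writer.write(rows.next());
                writer.newLine();
            }
        }
    }
}

With the code from the question that would be something like writeOnDriver(dataObj.javaRDD().map(Object::toString), "/tmp/emi/output.txt").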

Upvotes: 1
