Geetha Ponnusamy

Reputation: 497

spark standalone cluster - privilege issue in creating output file

I'm working with a Spark standalone cluster (Linux platform, Python application) and using NFS to share files between the master and worker machines. I'm testing with one master and one worker: I can submit the application and it runs on the worker, but it fails to create the output file (via Spark's saveAsTextFile) and throws a "mkdir failed" error. On both the master and the worker, the NFS directory has permission to create and delete files, and I can create files there manually, but when Spark tries to write into that directory it creates the temporary folders (_temporary and 0) and then fails to create the part files. I have tried "chmod -R 777", but it still fails. Is there any way to make this work?
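For reference, here is roughly what the failing step looks like in the application (a minimal sketch; the master URL, app name, and the /mnt/nfs/output path are placeholders, not the actual values):

from pyspark import SparkConf, SparkContext

# Illustrative only: the master URL and the NFS mount path are assumptions
conf = SparkConf().setAppName("nfs-output-test").setMaster("spark://master-host:7077")
sc = SparkContext(conf=conf)

rdd = sc.parallelize(["line 1", "line 2", "line 3"])
# saveAsTextFile first creates _temporary/0 under the target directory and then writes
# the part files; the failure described above happens at that part-file/mkdir step
rdd.saveAsTextFile("file:///mnt/nfs/output")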

Thanks in advance

Upvotes: 2

Views: 469

Answers (1)

Mohamed Thasin ah

Reputation: 11192

This looks like a permissions issue. When you create a directory on NFS, you have to set its ownership with

chown username:groupname path-of-the-NFS-directory

Then run the Spark application as that user, or as a member of that group.

If you are still facing the issue, try changing the group:

chgrp groupname path-of-the-NFS-directory

And then try,

chmod 777 path-of-the-NFS-directory

This will work.
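Before rerunning the job, it can also help to verify from the worker machine that the user running the Spark worker process can actually create directories under the NFS mount. A minimal sketch of such a check (the mount path is an assumption; run it as the worker's user):

import os
import tempfile

nfs_dir = "/mnt/nfs/output-parent"  # hypothetical NFS mount path; substitute the real one

# Run this as the same user that owns the Spark worker processes
print("running as uid:", os.getuid())

# Create and remove a subdirectory, mirroring what saveAsTextFile does with _temporary
test_dir = tempfile.mkdtemp(dir=nfs_dir)
print("created", test_dir)
os.rmdir(test_dir)
print("removed", test_dir)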

Upvotes: 1
