rotsner

Reputation: 692

Spark saveAsTextFile() results in Mkdirs failed to create for half of the directory

I am currently running a Java Spark application in Tomcat and receiving the following exception:

Caused by: java.io.IOException: Mkdirs failed to create file:/opt/folder/tmp/file.json/_temporary/0/_temporary/attempt_201603031703_0001_m_000000_5

on the line

text.saveAsTextFile("/opt/folder/tmp/file.json") //where text is a JavaRDD<String>

The issue is that /opt/folder/tmp/ already exists, and the job successfully creates everything up to /opt/folder/tmp/file.json/_temporary/0/, but then hits what looks like a permission issue on the remaining part of the path, _temporary/attempt_201603031703_0001_m_000000_5. However, I already gave the tomcat user ownership and permissions on the tmp/ directory (chown -R tomcat:tomcat tmp/ and chmod -R 755 tmp/). Does anyone know what could be happening?

Thanks

Edit for @javadba:

[root@ip tmp]# ls -lrta 
total 12
drwxr-xr-x 4 tomcat tomcat 4096 Mar  3 16:44 ..
drwxr-xr-x 3 tomcat tomcat 4096 Mar  7 20:01 file.json
drwxrwxrwx 3 tomcat tomcat 4096 Mar  7 20:01 .

[root@ip tmp]# cd file.json/
[root@ip file.json]# ls -lrta 
total 12
drwxr-xr-x 3 tomcat tomcat 4096 Mar  7 20:01 _temporary
drwxrwxrwx 3 tomcat tomcat 4096 Mar  7 20:01 ..
drwxr-xr-x 3 tomcat tomcat 4096 Mar  7 20:01 .

[root@ip file.json]# cd _temporary/
[root@ip _temporary]# ls -lrta 
total 12
drwxr-xr-x 2 tomcat tomcat 4096 Mar  7 20:01 0
drwxr-xr-x 3 tomcat tomcat 4096 Mar  7 20:01 ..
drwxr-xr-x 3 tomcat tomcat 4096 Mar  7 20:01 .

[root@ip _temporary]# cd 0/
[root@ip 0]# ls -lrta 
total 8
drwxr-xr-x 3 tomcat tomcat 4096 Mar  7 20:01 ..
drwxr-xr-x 2 tomcat tomcat 4096 Mar  7 20:01 .

The exception in catalina.out

Caused by: java.io.IOException: Mkdirs failed to create file:/opt/folder/tmp/file.json/_temporary/0/_temporary/attempt_201603072001_0001_m_000000_5
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:438)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:799)
    at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
    at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:91)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1193)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more

Upvotes: 14

Views: 21636

Answers (11)

Jerrold110

Reputation: 199

A solution that works for small files is to convert the PySpark DataFrame into a pandas DataFrame and then export it as a CSV with df.toPandas().to_csv('myDf.csv'). This is not feasible for large data, though, because of the memory limitations and performance overhead of pandas DataFrames.
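
For reference, a minimal PySpark sketch of that route (assuming the DataFrame df is small enough to fit in driver memory; the file name is just an illustration, not from the question):

    # Collect the (small) distributed DataFrame onto the driver as a pandas
    # DataFrame, then write it out as a single local CSV file.
    df.toPandas().to_csv("myDf.csv", index=False)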

Upvotes: 0

narayana kandukuri

Reputation: 63

If the transformed data fits into the driver (master) node's memory, convert the PySpark DataFrame to a pandas DataFrame and then save the pandas DataFrame to a file.

Upvotes: 0

codergirrl

Reputation: 155

Giving the full path works for me. Example:

 file:/Users/yourname/Documents/electric-chargepoint-2017-data
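
As a hedged sketch of how that full path would be passed to a write call in PySpark (the RDD name is illustrative and the path mirrors the example above, not the asker's setup):

    # Write to the local file system using an absolute path with an explicit
    # file: scheme, so Spark does not resolve it against a default filesystem.
    rdd.saveAsTextFile("file:/Users/yourname/Documents/electric-chargepoint-2017-data")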

Upvotes: 1

swapnil shashank

Reputation: 997

In my case, running the application in local mode fixed it:

    val spark = SparkSession
      .builder()
      .config("spark.master", "local")
      .appName("applicationName")
      .getOrCreate()

Upvotes: 0

Gajendra Chavan

Reputation: 81

I also had the same problem; my issue was resolved by using the full HDFS path:

Error

Caused by: java.io.IOException: Mkdirs failed to create file:/QA/Gajendra/SparkAutomation/Source/_temporary/0/_temporary/attempt_20180616221100_0002_m_000000_0 (exists=false, cwd=file:/home/gajendra/LiClipse Workspace/SpakAggAutomation)

Solution

Use the full HDFS path, including the scheme and NameNode address, i.e. hdfs://localhost:54310/<filePath>:

hdfs://localhost:54310/QA/Gajendra/SparkAutomation
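
As a minimal PySpark sketch (assuming the NameNode really listens on localhost:54310 as in this answer; adjust the host, port, and path to your own cluster, and rdd is any RDD of strings):

    # Pass the fully qualified HDFS URI so the output goes to HDFS instead of
    # being resolved against the local file system.
    rdd.saveAsTextFile("hdfs://localhost:54310/QA/Gajendra/SparkAutomation")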

Upvotes: 2

Ladislav Zitka

Reputation: 1000

This is a tricky one, but simple to solve: configure the job.local.dir variable to point to the working directory. The following code works fine for writing a CSV file:

def xmlConvert(spark):
    etl_time = time.time()
    df = spark.read.format('com.databricks.spark.xml').options(rowTag='HistoricalTextData').load(
        '/home/zangetsu/proj/prometheus-core/demo/demo-1-iot-predictive-maintainance/dataset/train/')
    df = df.withColumn("TimeStamp", df["TimeStamp"].cast("timestamp")).groupBy("TimeStamp").pivot("TagName").sum(
        "TagValue").na.fill(0)
    df.repartition(1).write.csv(
        path="/home/zangetsu/proj/prometheus-core/demo/demo-1-iot-predictive-maintainance/result/",
        mode="overwrite",
        header=True,
        sep=",")
    print("Time taken to do xml transformation: --- %s seconds ---" % (time.time() - etl_time))


if __name__ == '__main__':
    spark = SparkSession \
        .builder \
        .appName('XML ETL') \
        .master("local[*]") \
        .config('job.local.dir', '/home/zangetsu/proj/prometheus-core/demo/demo-1-iot-predictive-maintainance') \
        .config('spark.driver.memory','64g') \
        .config('spark.debug.maxToStringFields','200') \
        .config('spark.jars.packages', 'com.databricks:spark-xml_2.11:0.5.0') \
        .getOrCreate()

    print('Session created')

    try:
        xmlConvert(spark)

    finally:
        spark.stop()

Upvotes: -1

Gravity

Reputation: 229

I had the same issue as yours.

I also did not want to write to HDFS but to a local memory share.

After some research, I found that in my case the reason was that several nodes were executing the job, but some of them had no access to the directory where the data was supposed to be written.

The solution is therefore to make the directory available to all nodes, and then it works.

Upvotes: 0

Pim Witlox

Reputation: 9

So, I've been experiencing the same issue: in my setup there is no HDFS and Spark runs in standalone mode. I haven't been able to save Spark DataFrames to an NFS share using the native Spark writers. The process runs as a local user, and I try to write to the user's home folder. Even when creating a subfolder with permissions 777, I cannot write to it.

The workaround is to convert the DataFrame with toPandas() and then call to_csv(). This magically works.

Upvotes: 0

Piotr Kołaczkowski

Reputation: 2629

saveAsTextFile is really processed by the Spark executors. Depending on your Spark setup, the executors may run as a different user than your Spark application driver. I guess the driver prepares the directory for the job fine, but then the executors, running as a different user, have no rights to write in that directory.

Changing to 777 won't help, because permissions are not inherited by child directories, so you'd get 755 anyway.

Try running your Spark application as the same user that runs your Spark workers/executors.

Upvotes: 20

dagbj

Reputation: 61

Could it be SELinux/AppArmor playing a trick on you? Check with ls -Z and the system logs.

Upvotes: 1

WestCoastProjects

Reputation: 63201

I suggest temporarily changing the permissions to 777 and seeing if it works at that point. There have been bugs/issues with permissions on the local file system. If that still does not work, let us know whether anything changed or the result is exactly the same.

Upvotes: 3
