Jimmy Solano

Reputation: 11

Spark: Error on writing DataFrame

I'm trying to write a DataFrame in JSON format, but an error keeps coming up (regardless of which format I choose):

My code:

val finalDF = spark_session.createDataFrame(d, schema)
finalDF.show(10, false)
finalDF.write.mode("overwrite").json("test/df.json")

The show method prints the expected result, but the write then throws this error:

    ExitCodeException exitCode=-1073741515: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:575)
    at org.apache.hadoop.util.Shell.run(Shell.java:478)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:766)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:859)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:842)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:661)
    at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:501)
    at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:482)
    at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:498)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:467)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:433)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
    at org.apache.spark.sql.execution.datasources.CodecStreams$.createOutputStream(CodecStreams.scala:81)
    at org.apache.spark.sql.execution.datasources.CodecStreams$.createOutputStreamWriter(CodecStreams.scala:92)
    at org.apache.spark.sql.execution.datasources.json.JsonOutputWriter.<init>(JsonFileFormat.scala:140)
    at org.apache.spark.sql.execution.datasources.json.JsonFileFormat$$anon$1.newInstance(JsonFileFormat.scala:80)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:305)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:314)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:258)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1375)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:261)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
18/05/16 17:09:48 WARN FileUtil: Failed to delete file or dir [C:\Users\jsolano\IdeaProjects\Test2\test\df.json\_temporary\0\_temporary\attempt_20180516170948_0005_m_000000_0\.part-00000-ff4d215c-00f2-4585-89bb-d53426315539-c000.json.crc]: it still exists.
18/05/16 17:09:48 WARN FileUtil: Failed to delete file or dir [C:\Users\jsolano\IdeaProjects\Test2\test\df.json\_temporary\0\_temporary\attempt_20180516170948_0005_m_000000_0\part-00000-ff4d215c-00f2-4585-89bb-d53426315539-c000.json]: it still exists.
18/05/16 17:09:48 WARN FileOutputCommitter: Could not delete file:/C:/Users/jsolano/IdeaProjects/Test2/test/df.json/_temporary/0/_temporary/attempt_20180516170948_0005_m_000000_0
18/05/16 17:09:48 ERROR FileFormatWriter: Job job_20180516170948_0005 aborted.
18/05/16 17:09:48 ERROR Executor: Exception in task 0.0 in stage 5.0 (TID 4)
org.apache.spark.SparkException: Task failed while writing rows
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: ExitCodeException exitCode=-1073741515: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:575)
    at org.apache.hadoop.util.Shell.run(Shell.java:478)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:766)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:859)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:842)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:661)
    at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:501)
    at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:482)
    at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:498)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:467)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:433)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
    at org.apache.spark.sql.execution.datasources.CodecStreams$.createOutputStream(CodecStreams.scala:81)
    at org.apache.spark.sql.execution.datasources.CodecStreams$.createOutputStreamWriter(CodecStreams.scala:92)
    at org.apache.spark.sql.execution.datasources.json.JsonOutputWriter.<init>(JsonFileFormat.scala:140)
    at org.apache.spark.sql.execution.datasources.json.JsonFileFormat$$anon$1.newInstance(JsonFileFormat.scala:80)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:305)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:314)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:258)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1375)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:261)
    ... 8 more

It doesn't say anything specific.

I'm using Windows 10 with IntelliJ and Scala, and I've set the hadoop.home.dir property.

Upvotes: 1

Views: 4457

Answers (4)

Jivesh Pednekar

Reputation: 106

I removed hadoop.dll from the Hadoop home directory, as mentioned in the link below, and it worked for me.

java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Ljava/lang/String;JJJI)Ljava/io/FileDescriptor

Note: without hadoop.dll, an error was thrown while writing to a COS location.

Upvotes: 1

abhishek chaurasiya

Reputation: 145

The issue persists regardless of whether you write into "C:\Users\*" or "C:\some_dir".

I'm answering the question a bit late, but I resolved this issue with a different method, so I thought I'd share it.

  1. Download the Windows utilities for Hadoop (winutils)

  2. Extract the zip file hadoop-winutils-2.6.0.zip so that winutils.exe ends up in C:\Users\user_name\winutils\bin

  3. In your code, set the system property (before the SparkSession is created):

System.setProperty("hadoop.home.dir", "C:\\Users\\user_name\\winutils");

That's it. Now you can write your DataFrame into any Windows directory.
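
For completeness, a minimal self-contained sketch of how the property fits together with the write. The app name and sample data here are placeholders; the question's own d and schema would go where the sample rows are:

import org.apache.spark.sql.SparkSession

object WriteJsonOnWindows {
  def main(args: Array[String]): Unit = {
    // Must point at the folder that CONTAINS bin\winutils.exe,
    // and must be set before the SparkSession is created.
    System.setProperty("hadoop.home.dir", "C:\\Users\\user_name\\winutils")

    val spark = SparkSession.builder()
      .appName("write-json-demo") // placeholder name
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Placeholder data standing in for the question's `d` and `schema`.
    val finalDF = Seq((1, "a"), (2, "b")).toDF("id", "value")

    finalDF.show(10, truncate = false)
    finalDF.write.mode("overwrite").json("test/df.json")
    spark.stop()
  }
}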

Upvotes: -1

Jimmy Solano

Reputation: 11

Actually, I've found that Spark write operations don't work on Windows 10. I ran the script over and over on Windows 7 and it worked perfectly.

Upvotes: 0

Sc0rpion

Reputation: 73

It looks like the temp folders from your previous run were not cleaned up. This is a known issue; see https://issues.apache.org/jira/browse/SPARK-12216. Were you able to manually delete the temp folders at C:/Users/jsolano/IdeaProjects/Test2/test/df.json/_temporary?
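
If manual deletion works, here is a minimal sketch of clearing the leftover folder programmatically before re-running the write. The path comes from the question's log; note this only clears the stale files, it does not fix the underlying permission error:

import java.nio.file.{Files, Path, Paths}
import java.util.Comparator
import scala.collection.JavaConverters._

object CleanTempOutput {
  // Recursively delete a directory tree, deepest entries first.
  def deleteRecursively(root: Path): Unit =
    if (Files.exists(root)) {
      val walk = Files.walk(root)
      try walk.sorted(Comparator.reverseOrder[Path]()).iterator().asScala.foreach(Files.delete(_))
      finally walk.close()
    }

  def main(args: Array[String]): Unit =
    deleteRecursively(Paths.get("C:/Users/jsolano/IdeaProjects/Test2/test/df.json/_temporary"))
}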

Upvotes: 0
