Amresh Jha

Reputation: 78

Spark Parquet read error : java.io.EOFException: Reached the end of stream with XXXXX bytes left to read

While reading Parquet files in Spark, you may face the problem below:


Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 44, 10.23.5.196, executor 2): java.io.EOFException: Reached the end of stream with 193212 bytes left to read
    at org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
    at org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
    at org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
    at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
    at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:301)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:256)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:159)
    at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:124)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:215)


The failure occurs for the following Spark commands:

val df = spark.read.parquet("s3a://.../file.parquet")
df.show(5, false)

Upvotes: 6

Views: 7106

Answers (3)

Ankur Pandey

Reputation: 51

In my case, I was getting the following exceptions in different Spark apps:

Caused by: java.io.EOFException: Reached the end of stream with 1008401 bytes left to read
    at org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
    at org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
    at org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)

and

Caused by: java.io.IOException: could not read page in col [X] optional binary X (UTF8) as the dictionary was missing for encoding PLAIN_DICTIONARY
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.initDataReader(VectorizedColumnReader.java:571)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readPageV1(VectorizedColumnReader.java:616)

Setting this Spark config

--conf spark.sql.parquet.enableVectorizedReader=false

solved both issues.
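For reference, the same flag can also be set when building the SparkSession instead of on the spark-submit command line; here is a minimal Scala sketch (the app name and file path are placeholders):

import org.apache.spark.sql.SparkSession

// Disabling the vectorized reader makes Spark fall back to the row-based
// parquet-mr reader, which avoids both exceptions above.
val spark = SparkSession.builder()
  .appName("parquet-read-example") // placeholder app name
  .config("spark.sql.parquet.enableVectorizedReader", "false")
  .getOrCreate()

val df = spark.read.parquet("s3a://bucket/path/file.parquet") // placeholder path
df.show(5, false)

Note that disabling the vectorized reader can slow down scans, since batched columnar decoding is turned off.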

Upvotes: 0

sspaeti

Reputation: 176

For me, the above didn't do the trick, but the following did:

--conf spark.hadoop.fs.s3a.experimental.input.fadvise=sequential

I'm not sure why this works, but a related issue report and the documentation of these S3A options gave me the hint.
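If you'd rather set it in code than on spark-submit, a minimal Scala sketch (bucket and path are placeholders; as I understand it, "sequential" tells the S3A connector to stream the object sequentially rather than optimizing for random seeks):

import org.apache.spark.sql.SparkSession

// fadvise=sequential streams each S3 object as one sequential read,
// which works around truncated reads on some S3-compatible stores.
val spark = SparkSession.builder()
  .config("spark.hadoop.fs.s3a.experimental.input.fadvise", "sequential")
  .getOrCreate()

val df = spark.read.parquet("s3a://bucket/path/file.parquet") // placeholder path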

Upvotes: 10

Pulkit Bhardwaj

Reputation: 58

I think you can bypass this issue with:

--conf  spark.sql.parquet.enableVectorizedReader=false
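The same option can also be flipped at runtime on an existing session, since spark.sql.* settings are session-configurable; a short sketch, assuming `spark` is your SparkSession:

// Turn off the vectorized Parquet reader for this session only.
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
val df = spark.read.parquet("s3a://bucket/path/file.parquet") // placeholder path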

Upvotes: 2
