tadamhicks

Reputation: 925

Spark write.avro creates individual avro files

I have a spark-submit job I wrote that reads an input directory of JSON docs, does some processing on them using DataFrames, and then writes to an output directory. For some reason, though, it creates many individual Avro, Parquet, or JSON part files whenever I use the df.save or df.write methods.

In fact, I even used the saveAsTable method, and it did the same thing with parquet.gz files in the Hive warehouse.

It seems to me that this is inefficient and negates the point of a container file format. Is that right? Or is this normal behavior, and what I'm seeing is just an abstraction in HDFS?

If I am right that this is bad, how do I write the data frame from many files into a single file?
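
For concreteness, here is a minimal sketch of the kind of job I mean, with hypothetical paths and the processing step elided (SparkSession API shown for brevity):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    SparkSession spark = SparkSession.builder().appName("json-to-avro").getOrCreate();

    // Read the whole input directory of JSON docs into one DataFrame.
    Dataset<Row> df = spark.read().json("/data/in");

    // ... processing with DataFrame operations ...

    // The save path is a directory: Spark writes one part file per partition,
    // e.g. part-00000, part-00001, ..., which is the behavior described above.
    df.write().format("parquet").save("/data/out");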

Upvotes: 2

Views: 3065

Answers (1)

Ram Ghadiyaram

Reputation: 29155

As @zero323 said, this is normal behavior: the output is written by many workers in parallel, one part file per partition, which is what supports parallelism and fault tolerance.
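
One way to see this: the number of part files mirrors the number of partitions in the DataFrame. A quick check (the path is hypothetical):

    // Each partition is written by its own task, yielding one part file apiece.
    int numParts = df.rdd().getNumPartitions();      // e.g. 8
    df.write().format("parquet").save("/data/out");  // -> part-00000 ... part-00007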

I would suggest writing all the records to Parquet or Avro files (the Avro files hold the data as Avro GenericRecords), using something like this:

    dataframe.write().mode(SaveMode.Append)
        .format(FILE_FORMAT)                      // e.g. "parquet" or "com.databricks.spark.avro"
        .partitionBy("parameter1", "parameter2")  // group records by these column values
        .save(path);

It won't write to a single file, but it will group records with the same partition-column values into the same files, so you end up with fewer, medium-sized files.
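
If you genuinely need a single output file, one common approach (not part of the original answer) is to coalesce the DataFrame to one partition before writing. A minimal sketch, assuming the spark-avro package is on the classpath ("com.databricks.spark.avro" here; just "avro" on Spark 2.4+) and a hypothetical output path:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;

    // coalesce(1) funnels all the data through a single task, so only use it
    // when the output comfortably fits on one executor.
    Dataset<Row> single = df.coalesce(1);
    single.write()
          .mode(SaveMode.Overwrite)
          .format("com.databricks.spark.avro")
          .save("/data/out");

Note that the result is still a directory; it just contains a single part file. Turning that into a standalone file with a specific name takes a rename through the Hadoop FileSystem API outside of Spark.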

Upvotes: 2
