user3030878

Reputation: 275

Spark dataframe write method writing many small files

I've got a fairly simple job converting log files to parquet. It processes 1.1 TB of data (chunked into 64 MB to 128 MB files; our block size is 128 MB), which is roughly 12,000 files.

Job works as follows:

 val events = spark.sparkContext
  .textFile(s"$stream/$sourcetype")
  .map(_.split(" \\|\\| ").toList)
  .collect{case List(date, y, "Event") => MyEvent(date, y, "Event")}
  .toDF()

 events.write.mode(SaveMode.Append).partitionBy("date").parquet(s"$path")

It collects the events with a common schema, converts to a DataFrame, and then writes out as parquet.

The problem I'm having is that this can create a bit of an IO explosion on the HDFS cluster, as it's trying to create so many tiny files.

Ideally I want to create only a handful of parquet files within the partition 'date'.

What would be the best way to control this? Is it by using 'coalesce()'?

How will that affect the number of files created in a given partition? Does it depend on how many executors I have working in Spark? (currently set to 100).

Upvotes: 23

Views: 32199

Answers (7)

sakjung

Reputation: 11

For PySpark Users,

I recommend using RepartiPy to easily handle the small file problem.

import repartipy

desired_size_in_bytes = 134217728  # 128 MiB

# Use this if you have enough (executor) memory to cache the whole DataFrame
# If you do NOT have enough memory (i.e. the DataFrame is too large), use 'repartipy.SamplingSizeEstimator' instead.
with repartipy.SizeEstimator(spark=spark, df=df) as se:
    N = se.get_desired_partition_count(desired_partition_size_in_bytes=desired_size_in_bytes)
    se.reproduce().repartition(N).write.mode("append").partitionBy("date").parquet(f"{path}")
    # or
    se.reproduce().coalesce(N).write.mode("append").partitionBy("date").parquet(f"{path}")

The code above calculates N so that each partition is roughly 128 MiB after the repartition. Please see the docs or the blog post for more details.

If you want a similar approach in Scala or Java, you can look at the RepartiPy source code to build your own utility.
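As a rough Scala sketch of the same idea (an illustration under assumptions, not RepartiPy itself): on recent Spark versions the optimizer statistics expose a size estimate for the DataFrame, from which a target partition count can be derived. The estimate can be inaccurate (it may even be Long.MaxValue for unknown plans), so sanity-check N before relying on it.

import org.apache.spark.sql.SaveMode

// Target ~128 MiB per output file; the size comes from optimizer
// statistics and is only an estimate.
val targetFileSizeBytes = 128L * 1024 * 1024
val estimatedSizeBytes  = df.queryExecution.optimizedPlan.stats.sizeInBytes.toLong
val n = math.max(1, (estimatedSizeBytes / targetFileSizeBytes).toInt)

df.repartition(n)
  .write.mode(SaveMode.Append)
  .partitionBy("date")
  .parquet(s"$path")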

Upvotes: 0

Jeff A.

Reputation: 81

How about running a Hadoop Streaming job like this to consolidate all the parquet files into one:

$ hadoop jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-streaming-2.7.1.2.3.2.0-2950.jar \
 -Dmapred.reduce.tasks=1 \
 -input "/hdfs/input/dir" \
 -output "/hdfs/output/dir" \
 -mapper cat \
 -reducer cat

Upvotes: 0

Narfanator

Reputation: 5803

Duplicating my answer from here: https://stackoverflow.com/a/53620268/171916

This is working for me very well:

data.repartition(n, "key").write.partitionBy("key").parquet("/location")

It produces N files in each output partition (directory), and is (anecdotally) faster than using coalesce and (again, anecdotally, on my data set) faster than only repartitioning on the output.

If you're working with S3, I also recommend doing everything on local drives (Spark does a lot of file creation/rename/deletion during write outs) and once it's all settled use hadoop FileUtil (or just the aws cli) to copy everything over:

import java.net.URI
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}
// ...
  def copy(
          in : String,
          out : String,
          sparkSession: SparkSession
          ) = {
    FileUtil.copy(
      FileSystem.get(new URI(in), sparkSession.sparkContext.hadoopConfiguration),
      new Path(in),
      FileSystem.get(new URI(out), sparkSession.sparkContext.hadoopConfiguration),
      new Path(out),
      false, // deleteSource
      sparkSession.sparkContext.hadoopConfiguration
    )
  }
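
A hypothetical call, with placeholder paths (write the job output locally or to HDFS first, then push the finished result in one pass):

copy("file:///tmp/job-output", "s3a://my-bucket/job-output", sparkSession)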

Upvotes: 2

eliasah

Reputation: 40360

The simplest solution would be to replace your actual partitioning with:

df
 .repartition(to_date($"date"))
 .write.mode(SaveMode.Append)
 .partitionBy("date")
 .parquet(s"$path")

You can also use more precise partitioning for your DataFrame, i.e. the day and maybe the hour of an hour range, and then be less precise for the writer. It really depends on the amount of data.

You can reduce entropy by repartitioning the DataFrame and then writing with the partition-by clause.
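
A rough sketch of that idea (it assumes the date column actually carries a full timestamp, and that spark.implicits._ is in scope for the $ syntax): repartition on a finer grain such as day plus hour so the shuffle spreads the data, but keep only the coarser day column for the writer.

import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.{to_date, hour}

// Finer-grained repartition (day + hour) to spread the data,
// coarser write partitioning (day only).
df.withColumn("day", to_date($"date"))
  .withColumn("hour", hour($"date"))
  .repartition($"day", $"hour")
  .write.mode(SaveMode.Append)
  .partitionBy("day")
  .parquet(s"$path")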

Upvotes: 4

user10519067

Reputation:

In Python you can rewrite Raphael Roth's answer as:

(df
  .repartition("date")
  .write.mode("append")
  .partitionBy("date")
  .parquet("{path}".format(path=path)))

You might also consider adding more columns to .repartition to avoid problems with very large partitions:

(df
  .repartition("date", another_column, yet_another_column)
  .write.mode("append")
  .partitionBy("date")
  .parquet("{path}".format(path=path)))

Upvotes: 6

Jai Prakash

Reputation: 2759

I came across the same issue, and using coalesce solved my problem.

df
  .coalesce(3) // number of parts/files 
  .write.mode(SaveMode.Append)
  .parquet(s"$path")

For more information on using coalesce or repartition you can refer to the following post: spark: coalesce or repartition
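
For a quick sense of the difference (a minimal sketch, not taken from the linked post): coalesce only merges existing partitions and avoids a full shuffle, while repartition shuffles all the data into evenly sized partitions.

val merged     = df.coalesce(3)    // narrow dependency, no full shuffle
val reshuffled = df.repartition(3) // full shuffle, more evenly sized partitions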

Upvotes: 3

Raphael Roth

Reputation: 27373

You have to repartition your DataFrame to match the partitioning of the DataFrameWriter.

Try this:

df
  .repartition($"date")
  .write.mode(SaveMode.Append)
  .partitionBy("date")
  .parquet(s"$path")

Upvotes: 18
