Reputation: 401
For example, if I want to save a table, what is the difference between these two strategies?
someDF.write.format("parquet")
  .bucketBy(4, "country")
  .mode(SaveMode.Overwrite)
  .saveAsTable("someTable")

someDF.write.format("parquet")
  .partitionBy("country") // <-- here is the only difference
  .mode(SaveMode.Overwrite)
  .saveAsTable("someTable")
My guess is that bucketBy in the first case creates 4 directories for the countries, while partitionBy will create as many directories as there are unique values in the "country" column. Is that understanding correct?
Upvotes: 11
Views: 9501
Reputation: 1455
Some differences:

- bucketBy is only applicable for file-based data sources in combination with DataFrameWriter.saveAsTable(), i.e. when saving to a Spark-managed table, whereas partitionBy can be used when writing to any file-based data source.
- bucketBy is intended for the write-once, read-many-times scenario, where the up-front cost of creating a persistent bucketised version of a data source pays off by avoiding a costly shuffle on read in later jobs, whereas partitionBy is useful for meeting the data layout requirements of downstream consumers of a Spark job's output.

"I guess, that bucketBy in first case creates 4 directories with countries, while partitionBy will create as many directories as many unique values in column "countries". is it correct understanding?"
Yes, for partitionBy. However, bucketBy will create 4 bucket files (Parquet by default).
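As a rough illustration of the two layouts (a toy Python model, not Spark itself; the data and the use of Python's built-in hash() are assumptions for demonstration — Spark actually assigns buckets with Murmur3 hashing):

```python
# Toy model of the two on-disk layouts (illustrative only, not Spark).
countries = ["US", "DE", "US", "FR", "DE", "US", "JP", "FR"]

# partitionBy("country"): one directory per distinct value
partition_dirs = sorted({f"country={c}" for c in countries})

# bucketBy(4, "country"): a fixed number of output buckets, chosen by
# hashing the column value (hash() here is only a stand-in for Murmur3)
NUM_BUCKETS = 4
bucket_ids = {hash(c) % NUM_BUCKETS for c in countries}

print(partition_dirs)                   # one entry per distinct country
print(len(bucket_ids) <= NUM_BUCKETS)   # True: never more than 4 buckets
```

The key contrast: the number of partition directories grows with the data's distinct values, while the number of buckets is fixed up front at write time.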
Upvotes: 8
Reputation: 307
Unlike bucketing in Apache Hive, Spark SQL creates bucket files per number of buckets and per partition. In other words, the number of bucket files is the number of buckets multiplied by the number of task writers (one per partition).

You can also use bucketBy together with partitionBy, in which case each partition (the last-level partition, in the case of multilevel partitioning) will contain 'n' bucket files.
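The multiplication described above can be sketched as a small calculation (the task counts are hypothetical examples, not values Spark guarantees):

```python
# Illustrative arithmetic for the bucket-file count (not Spark itself).
def bucket_file_count(num_buckets: int, num_task_writers: int) -> int:
    # each writer task can emit one file per bucket it sees
    return num_buckets * num_task_writers

# bucketBy(4, ...) written by 8 task writers -> up to 32 bucket files
print(bucket_file_count(4, 8))  # 32

# combined with partitionBy: with, say, 5 distinct countries and one
# writer per partition, each partition directory holds up to 4 bucket
# files, for 20 files in total
print(5 * bucket_file_count(4, 1))  # 20
```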
Upvotes: 0