Reputation: 639
My Foundry transform is producing different amounts of data on different runs, but I want each output file to contain a similar number of rows. I can call DataFrame.count() and then coalesce/repartition, but that requires computing the full dataset and then either caching it or recomputing it. Is there a way to have Spark take care of this?
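For reference, the workaround I have in mind looks roughly like this (the rows-per-file target is just an example value):
import math

output_df = output_df.cache()  # keep the data around so it isn't recomputed

target_rows_per_file = 1000000  # example value only
num_files = max(1, math.ceil(output_df.count() / target_rows_per_file))

output.write_dataframe(output_df.repartition(num_files))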
Upvotes: 0
Views: 165
Reputation: 193
proggeo's answer is useful if the only thing you care about is the number of records per file. However, sometimes it is useful to bucket your data so Foundry can optimize downstream operations such as Contour analyses or other transforms.
In those cases you can use something like:
bucket_column = 'equipment_number'
num_files = 8

# Repartition by the bucket column so each bucket maps to one output file
output_df = output_df.repartition(num_files, bucket_column)

output.write_dataframe(
    output_df,
    bucket_cols=[bucket_column],
    bucket_count=num_files,
)
If your bucket column is well distributed, this keeps a similar number of rows in each dataset file.
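For context, here is a minimal sketch of how this could sit inside a full transform, assuming the standard transforms.api decorator; the dataset paths and the equipment_number column are placeholders:
from transforms.api import transform, Input, Output


@transform(
    output=Output("/path/to/output_dataset"),  # placeholder path
    source=Input("/path/to/input_dataset"),    # placeholder path
)
def compute(output, source):
    bucket_column = 'equipment_number'
    num_files = 8

    # Repartition by the bucket column, then write bucketed output
    output_df = source.dataframe().repartition(num_files, bucket_column)

    output.write_dataframe(
        output_df,
        bucket_cols=[bucket_column],
        bucket_count=num_files,
    )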
Upvotes: 2
Reputation: 639
You can use the spark.sql.files.maxRecordsPerFile configuration option by setting it per output of your @transform:
output.write_dataframe(
    output_df,
    options={"maxRecordsPerFile": "1000000"},
)
Upvotes: 0