Luis Leal

Reputation: 3514

Number of splits in dataset exceeds dataset split limit, Dremio+Hive+Spark

We have a stack consisting of Hadoop + Hive + Spark + Dremio. Because Spark writes many HDFS files for a single Hive partition (the count depends on the number of workers/tasks), Dremio fails when querying the table: the number of HDFS file splits exceeds Dremio's dataset split limit. Is there any way to solve this without manually configuring a smaller number of workers in Spark? (We don't want to lose Spark's distributed performance and benefits.)
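For illustration, a minimal sketch of the write pattern that produces the problem; the table name, the partition column dt, and the output path are hypothetical stand-ins. Every task writes its own part-file into each Hive partition it holds rows for, so one partition can collect as many files as there are tasks:

import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("many-files-demo")
  .enableHiveSupport()
  .getOrCreate()

// Hypothetical source: any DataFrame spread across many tasks.
val df = spark.table("source_table")

// Each task writes its own part-file per Hive partition it touches,
// so a single dt value can end up with one file per task.
df.write
  .partitionBy("dt")
  .mode(SaveMode.Append)
  .parquet("/hypothetical/warehouse/table")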

Upvotes: 0

Views: 654

Answers (1)

Jayadeep Jayaraman

Reputation: 2825

You can repartition the DataFrame on the Hive partition columns before writing, which will produce one file per partition. The shuffle still assigns at least one task per partition value, so enough parallelism is maintained in your Spark job.

df.repartition($"a", $"b", $"c", $"d", $"e")
  .write.partitionBy("a", "b", "c", "d", "e").mode(SaveMode.Append).parquet(s"$location")
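As a self-contained sketch of the same idea, with the table name, the columns year/month, and the output path as hypothetical stand-ins for a through e and $location above: repartitioning on the partition columns shuffles all rows that share a partition value into a single task, so the subsequent partitionBy write emits exactly one file per Hive partition.

import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder()
  .appName("compact-writes")
  .enableHiveSupport()
  .getOrCreate()

// Hypothetical input: any DataFrame with the Hive partition columns year and month.
val df = spark.table("source_table")

// Shuffle all rows sharing a (year, month) value into one Spark partition,
// so the write below produces one file per Hive partition instead of one per task.
df.repartition(col("year"), col("month"))
  .write
  .partitionBy("year", "month")
  .mode(SaveMode.Append)
  .parquet("/hypothetical/output/path")

Note the trade-off: this removes write parallelism within a single Hive partition in exchange for fewer files, so very large partitions yield correspondingly large single files, but the job as a whole stays parallel across partition values.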

Upvotes: 0
