Reputation: 1
The number of tasks in Spark is determined by the total number of RDD partitions at the beginning of a stage. For example, when a Spark application reads data from HDFS, the partitioning of the Hadoop RDD is inherited from FileInputFormat in MapReduce, which is affected by the HDFS block size, the value of mapred.min.split.size, the compression codec, and so on.
The tasks in the screenshot took 7, 7, and 4 seconds, and I want to balance them. Also, the stage is split into only 3 tasks. Is there any way to tell Spark how many partitions/tasks to use?
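A minimal sketch of the kind of read described above (the path, app name, and split size are hypothetical):

import org.apache.spark.{SparkConf, SparkContext}

object ReadExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("read-example"))

    // FileInputFormat computes split size as max(minSplitSize, min(maxSplitSize, blockSize)),
    // so raising mapred.min.split.size above the block size yields fewer, larger splits
    // (and therefore fewer tasks). The 256 MB value here is made up.
    sc.hadoopConfiguration.set("mapred.min.split.size", (256L * 1024 * 1024).toString)

    // The second argument is only a *minimum* partition count; the actual number
    // still depends on block size and whether the compression codec is splittable.
    val lines = sc.textFile("hdfs:///data/input", minPartitions = 3)
    println(s"partitions = ${lines.getNumPartitions}")

    sc.stop()
  }
}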
Upvotes: 0
Views: 2794
Reputation: 695
The number of tasks depends on the number of partitions. You can set a partitioner for the RDD, and in the partitioner you can specify the number of partitions.
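A minimal sketch, assuming a SparkContext sc is already available and using a made-up partition count of 12:

import org.apache.spark.HashPartitioner

// partitionBy only applies to key-value RDDs; HashPartitioner(12) produces 12 partitions,
// and therefore 12 tasks in the stages that consume this RDD.
val pairs = sc.textFile("hdfs:///data/input").map(line => (line.length, line))
val partitioned = pairs.partitionBy(new HashPartitioner(12))

// For a non-pair RDD, repartition(n) (or coalesce(n) to shrink without a shuffle)
// also changes the partition count directly.
val rebalanced = sc.textFile("hdfs:///data/input").repartition(12)
println(rebalanced.getNumPartitions)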
Upvotes: 0