Viktor Khristenko

Reputation: 933

Spark Dataset aggregation similar to RDD aggregate(zero)(accum, combiner)

RDD has a very useful method, aggregate, that allows you to accumulate values starting from some zero value and then combine the partial results across partitions. Is there any way to do that with Dataset[T]? As far as I can see from the Scala doc, there is nothing capable of doing that. Even the reduce method only supports binary operations with T as both arguments. Any reason why? And is there anything capable of doing the same?
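For context, a minimal sketch of the RDD method in question, computing a (sum, count) pair (assuming an existing SparkContext named sc):

```scala
val rdd = sc.parallelize(Seq(1, 2, 3, 4))

// zero value: (0, 0)
// seqOp: folds each element into the per-partition accumulator
// combOp: merges the per-partition accumulators
val (sum, count) = rdd.aggregate((0, 0))(
  (acc, x) => (acc._1 + x, acc._2 + 1),
  (a, b) => (a._1 + b._1, a._2 + b._2)
)
```

Note that the accumulator type (Int, Int) differs from the element type Int, which is exactly what reduce on Dataset[T] cannot express.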

Thanks a lot!

VK

Upvotes: 3

Views: 1607

Answers (1)

zero323

Reputation: 330113

There are two different classes which can be used to achieve aggregate-like behavior in the Dataset API:

- UserDefinedAggregateFunction, which uses SQL types and can be applied to untyped DataFrames
- Aggregator, which uses standard Scala types with Encoders and can be applied to typed Datasets

Both provide an additional finalization method (evaluate and finish, respectively) which is used to generate the final result, and both can be used for global as well as by-key aggregations.
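As a sketch of the typed variant: an Aggregator maps directly onto the aggregate(zero)(seqOp, combOp) pattern, with zero as the initial buffer, reduce as the per-element accumulator, merge as the cross-partition combiner, and finish as the finalization step. A minimal example computing an average over a Dataset[Int] (names here are illustrative):

```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}
import org.apache.spark.sql.expressions.Aggregator

// Buffer holds (sum, count); finish turns it into the average.
object AvgAgg extends Aggregator[Int, (Long, Long), Double] {
  def zero: (Long, Long) = (0L, 0L)
  def reduce(b: (Long, Long), x: Int): (Long, Long) = (b._1 + x, b._2 + 1)
  def merge(b1: (Long, Long), b2: (Long, Long)): (Long, Long) =
    (b1._1 + b2._1, b1._2 + b2._2)
  def finish(b: (Long, Long)): Double = b._1.toDouble / b._2
  def bufferEncoder: Encoder[(Long, Long)] = Encoders.tuple(Encoders.scalaLong, Encoders.scalaLong)
  def outputEncoder: Encoder[Double] = Encoders.scalaDouble
}

val spark = SparkSession.builder.master("local[*]").getOrCreate()
import spark.implicits._

val ds = Seq(1, 2, 3, 4).toDS()

// Global aggregation via the typed column; also usable after groupByKey.
ds.select(AvgAgg.toColumn).show()
```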

Upvotes: 4
