Reputation: 957
Let's say I have a record with 4 identifier variables: var1, var2, var3, var4, and an additional variable: var5.
I want to perform a reduce operation on all the records that have the same value in any one of the identifier fields. I'm trying to work out how I can implement this kind of solution with a minimal amount of shuffling.
Is there a way to tell Spark to put all the records that have at least one match in the identifier variables on the same partition? I know there is the option of a custom partitioner, but I'm not sure whether one can be implemented in a way that supports my use case.
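For example, a record might look like this (the concrete types and values are purely illustrative):

```scala
// Illustrative only: four identifier fields plus a payload to reduce.
case class Record(var1: String, var2: String, var3: String,
                  var4: String, var5: Double)

// These two records should be reduced together: they share var1 = "a".
val r1 = Record("a", "b", "c", "d", 1.0)
val r2 = Record("a", "x", "y", "z", 2.0)
```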
Upvotes: 2
Views: 1077
Reputation: 330093
Well, quite a lot depends on the structure of your data in general and on how much a priori knowledge you have.
In the worst-case scenario, when your data is relatively dense and uniformly distributed as below and you perform a one-off analysis, the only way to achieve your goal seems to be to put everything into one partition.
[1 0 1 0]
[0 1 0 1]
[1 0 0 1]
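In code that degenerate fallback is a one-liner (a sketch, assuming records is an existing RDD):

```scala
// Everything lands in a single partition, so the reduce sees all records,
// but all parallelism is lost.
val singlePartition = records.repartition(1)
```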
Obviously, that is not a very useful approach. One thing you can try instead is to analyze at least a subset of your data to get insight into its structure, and use this knowledge to build a custom partitioner that ensures relatively low shuffle traffic while keeping a reasonable distribution over the cluster.
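If the sample does reveal exploitable structure, the partitioner itself is a small amount of code. A minimal sketch, assuming string identifiers and a precomputed identifier-to-partition assignment (the IdentifierPartitioner class and the assignment map are my own illustration, not something Spark provides):

```scala
import org.apache.spark.Partitioner

// Hypothetical partitioner: identifiers seen in the sampled subset are
// routed by a learned assignment; unseen ones fall back to hashing.
class IdentifierPartitioner(
    override val numPartitions: Int,
    assignment: Map[String, Int]
) extends Partitioner {

  private def nonNegativeMod(h: Int): Int =
    ((h % numPartitions) + numPartitions) % numPartitions

  override def getPartition(key: Any): Int = key match {
    case s: String => assignment.getOrElse(s, nonNegativeMod(s.hashCode))
    case other     => nonNegativeMod(if (other == null) 0 else other.hashCode)
  }
}

// Usage, e.g. on records keyed by var1:
// records.keyBy(_.var1).partitionBy(new IdentifierPartitioner(8, assignment))
```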
As a general framework, I would try one of these:
Both solutions are computationally intensive and require quite a lot of work to implement, so it is probably not worth the fuss for ad hoc analytics, but if you have a reusable pipeline it may be worth trying.
Upvotes: 1
Reputation: 27455
This is not generally possible. Imagine you have X with keys (x, x, x, x) and Y with keys (y, y, y, y). There is no reason to put them in the same partition, right? But now comes Z with keys (x, x, y, y). Z has to be in the same partition as X and also in the same partition as Y, so X and Y must end up together after all. Chains like this can force every record into a single partition, so no non-trivial partitioner can satisfy the requirement.
I suggest just taking the shuffle. Create 4 RDDs, each keyed by a different identifier field.
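A sketch of that approach, assuming the records are (var1, var2, var3, var4, var5) tuples, sc is an existing SparkContext, and summing var5 stands in for the real reduce:

```scala
// Hypothetical input; in practice this would come from your data source.
val records = sc.parallelize(Seq(
  ("a", "b", "c", "d", 1.0),
  ("a", "x", "y", "z", 2.0)
))

// One shuffle per identifier field; each reduceByKey hash-partitions
// by its own key, so records matching on that field land together.
val byVar1 = records.map { case (v1, _, _, _, v5) => (v1, v5) }.reduceByKey(_ + _)
val byVar2 = records.map { case (_, v2, _, _, v5) => (v2, v5) }.reduceByKey(_ + _)
val byVar3 = records.map { case (_, _, v3, _, v5) => (v3, v5) }.reduceByKey(_ + _)
val byVar4 = records.map { case (_, _, _, v4, v5) => (v4, v5) }.reduceByKey(_ + _)
```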
Upvotes: 1