Boris Okner

Reputation: 330

riak_core vnode distribution

I have a setup with riak_core application on 2-node cluster. The template for the application was generated following https://github.com/rzezeski/try-try-try/tree/master/2011/riak-core-first-multinode

When I look at the distribution of vnodes across 2 nodes:

{ok, Ring} = riak_core_ring_manager:get_my_ring(),
riak_core_ring:chash(Ring).

I get this:

{64,
 [{0, '[email protected]'},
  {22835963083295358096932575511191922182123945984, '[email protected]'},
  {45671926166590716193865151022383844364247891968, '[email protected]'},
  {68507889249886074290797726533575766546371837952, '[email protected]'},
  {91343852333181432387730302044767688728495783936, '[email protected]'},
  {114179815416476790484662877555959610910619729920, '[email protected]'},
  {137015778499772148581595453067151533092743675904, '[email protected]'},
  {159851741583067506678528028578343455274867621888, '[email protected]'},
  {182687704666362864775460604089535377456991567872, '[email protected]'},
  {205523667749658222872393179600727299639115513856, '[email protected]'},
  {228359630832953580969325755111919221821239459840, '[email protected]'}, 
...............<the rest of vnodes>.......................
]
}

So the vnodes go in pairs: two adjacent partitions belong to the same physical node. From the documentation I'd expect adjacent partitions to belong to different physical nodes. I'd appreciate it if someone could elaborate on whether the above is a bug, a feature, or perhaps a misconfiguration on my side.
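For reference, here is a quick way to count how many adjacent partition pairs share an owner. This is a sketch, assuming it is run in the remote shell of a node where riak_core is already running (the ring wraps around, so the last partition is compared with the first):

```erlang
%% Count adjacent vnode pairs owned by the same physical node.
{ok, Ring} = riak_core_ring_manager:get_my_ring(),
{_NumPartitions, Owners} = riak_core_ring:chash(Ring),
Nodes = [Node || {_Index, Node} <- Owners],
%% Pair each owner with the next one, wrapping around the ring.
Adjacent = lists:zip(Nodes, tl(Nodes) ++ [hd(Nodes)]),
length([same || {A, B} <- Adjacent, A =:= B]).
```

With an ideal alternating layout on two nodes this returns 0; with the paired layout shown above it returns a count close to the number of partitions divided by two.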

Regards, Boris

Upvotes: 1

Views: 482

Answers (2)

Boris Okner

Reputation: 330

I also got a distribution where all adjacent vnodes reside on different physical nodes by setting:

{wants_claim_fun, {riak_core_claim, wants_claim_v3}},
{choose_claim_fun, {riak_core_claim, choose_claim_v3}}

for the riak_core application.

'target_n_val' does not seem to affect the distribution in this case.
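For context, a sketch of how these settings sit in app.config (other riak_core entries and applications are omitted; adjust to match your own file):

```erlang
%% app.config (sketch): select the v3 claim functions for riak_core.
[
 {riak_core, [
   {wants_claim_fun,  {riak_core_claim, wants_claim_v3}},
   {choose_claim_fun, {riak_core_claim, choose_claim_v3}}
 ]}
].
```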

Upvotes: 0

Joe

Reputation: 28356

riak_core defaults to a target_n_val of 4 (see https://github.com/basho/riak_core/blob/riak_core-0.14.2/ebin/riak_core.app#L73). This is the preflist size used by the riak_core_claim module.

The claim algorithm tries to ensure that in any chain of target_n_val consecutive vnodes, no two reside on the same node.

If you set target_n_val to 2 in your app.config, it should do a better job of not putting adjacent vnodes on the same node.
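A minimal sketch of that override in app.config (other riak_core settings omitted):

```erlang
%% app.config (sketch): lower target_n_val so the claim algorithm
%% only guarantees that no 2 consecutive vnodes share a node,
%% which is the strongest guarantee possible on a 2-node cluster.
[
 {riak_core, [
   {target_n_val, 2}
 ]}
].
```

Note that on a 2-node cluster the default of 4 cannot be satisfied anyway (4 consecutive vnodes cannot land on 4 distinct nodes when only 2 exist), which is why the claimant falls back to the paired layout in the question.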

Upvotes: 2
