Ike Walker

Reputation: 65547

Fully removing a decommissioned Cassandra node

I'm running Cassandra 1.0 and shrinking a ring from 5 nodes down to 4. To do that, I ran nodetool decommission on the node I want to remove, then stopped Cassandra on that host and used nodetool move and nodetool cleanup on the remaining 4 nodes to update their tokens and rebalance the cluster.

My seed nodes are A and B. The node I removed is C.
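
Roughly, the sequence of commands was the following (the hostnames and new tokens here are placeholders, not my actual values):

# On node C, the node being removed:
nodetool -h nodeC decommission

# After the decommission finished, Cassandra was stopped on node C.
# Then, on each remaining node (A, B, D, E), the new token was assigned
# and a cleanup was run, e.g. for node A:
nodetool -h nodeA move <new token for A>
nodetool -h nodeA cleanup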

That seemed to work fine for 6-7 days, but now one of my four nodes thinks the decommissioned node is still part of the ring.

Why did this happen, and what's the proper way to fully remove the decommissioned node from the ring?

Here's the output of nodetool ring on the one node that still thinks the decommissioned node is part of the ring:

Address         DC          Rack        Status State   Load            Owns    Token                                       
                                                                               127605887595351923798765477786913079296     
xx.x.xxx.xx     datacenter1 rack1       Up     Normal  616.17 MB       25.00%  0                                           
xx.xxx.xxx.xxx  datacenter1 rack1       Up     Normal  1.17 GB         25.00%  42535295865117307932921825928971026432      
xx.xxx.xx.xxx   datacenter1 rack1       Down   Normal  ?               9.08%   57981914123659253974350789668785134662      
xx.xx.xx.xxx    datacenter1 rack1       Up     Normal  531.99 MB       15.92%  85070591730234615865843651857942052864      
xx.xxx.xxx.xx   datacenter1 rack1       Up     Normal  659.92 MB       25.00%  127605887595351923798765477786913079296     

Here's the output of nodetool ring on the other 3 nodes:

Address         DC          Rack        Status State   Load            Owns    Token                                       
                                                                               127605887595351923798765477786913079296     
xx.x.xxx.xx     datacenter1 rack1       Up     Normal  616.17 MB       25.00%  0                                           
xx.xxx.xxx.xxx  datacenter1 rack1       Up     Normal  1.17 GB         25.00%  42535295865117307932921825928971026432      
xx.xx.xx.xxx    datacenter1 rack1       Up     Normal  531.99 MB       25.00%  85070591730234615865843651857942052864      
xx.xxx.xxx.xx   datacenter1 rack1       Up     Normal  659.92 MB       25.00%  127605887595351923798765477786913079296     

UPDATE: I tried to remove the node using nodetool removetoken on node B, which is the one that still claims node C is in the ring. That command ran for 5 hours and didn't seem to do anything. The only change is that node C's state is now "Leaving" when I run nodetool ring on node B.

Upvotes: 4

Views: 2634

Answers (2)

akshat thakar

Reputation: 1527

With Cassandra 2.0, you need to run nodetool decommission on the node to be removed. In your case, check whether you have removed the node's entry from cassandra-topology.properties.
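
For reference, if you are using PropertyFileSnitch, cassandra-topology.properties maps each node's IP address to a datacenter and rack, so a leftover entry for the removed node would look something like this (placeholder addresses, not your actual ones):

# cassandra-topology.properties: <node IP>=<datacenter>:<rack>
10.0.0.1=datacenter1:rack1
10.0.0.2=datacenter1:rack1
# stale entry for the decommissioned node -- this line should be removed
10.0.0.3=datacenter1:rack1
default=datacenter1:rack1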

Upvotes: 0

Ike Walker

Reputation: 65547

I was able to remove the decommissioned node using nodetool removetoken, but I had to use the force option.

Here's the output of my commands:

iowalker:~$ nodetool -h `hostname` removetoken 57981914123659253974350789668785134662

<waited 5 hours, the node was still there>

iowalker:~$ nodetool -h `hostname` removetoken status
RemovalStatus: Removing token (57981914123659253974350789668785134662). Waiting for replication confirmation from [/xx.xxx.xxx.xx,/xx.x.xxx.xx,/xx.xx.xx.xxx].
iowalker:~$ nodetool -h `hostname` removetoken force
RemovalStatus: Removing token (57981914123659253974350789668785134662). Waiting for replication confirmation from [/xx.xxx.xxx.xx,/xx.x.xxx.xx,/xx.xx.xx.xxx].
iowalker:~$ nodetool -h `hostname` removetoken status
RemovalStatus: No token removals in process.

Upvotes: 3
