Reputation: 299
I am using Storm 1.0.2. Currently we have a small topology and we only want one instance of Nimbus running. However, in the rare event that our only Nimbus instance goes down with disk loss, bringing up a new instance will never work. The new instance goes to ZK and, because the topology data is missing, is never elected leader and never comes up again. This is the issue we faced. The only workaround I can think of is to store this data on a separate persistent disk, so even if our only Nimbus instance goes down we don't lose the topology jars, and ZK can make the next instance leader without any issues.
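The persistent-disk workaround above comes down to pointing Nimbus's local state directory at the durable volume. A minimal `storm.yaml` sketch, assuming a hypothetical mount point `/mnt/persistent`:

```yaml
# storm.yaml on the Nimbus host (mount point is illustrative, not from the question)
# Nimbus keeps topology jars and serialized configs under storm.local.dir;
# placing it on a persistent disk lets a replacement Nimbus instance reuse
# that state after the original host is lost.
storm.local.dir: "/mnt/persistent/storm"
```

A replacement instance started with the same `storm.local.dir` contents should then pass the leader-election check, since the topology code it is asked to serve is present locally.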
Am I missing something? Is there any other way to reset ZK besides deleting the Nimbus data (somehow deleting the /storm/nimbus dir did not work)? Is there any config to disable leader election in Nimbus when running a single instance on staging environments?
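On the ZK-reset question: Storm's state lives under the whole `/storm` subtree (assuming the default `storm.zookeeper.root`), not just `/storm/nimbus`, which may be why deleting only that node did not help. A hedged sketch using ZooKeeper's own CLI, with an illustrative server address; stop Nimbus and the Supervisors before doing this, as it wipes all cluster state:

```shell
# Connect with ZooKeeper's bundled CLI (path and host are illustrative)
bin/zkCli.sh -server zk-host:2181

# Then, inside the zkCli shell:
#   ls /storm        -- inspect what Storm has stored
#   rmr /storm       -- recursively delete the whole Storm subtree (ZK 3.4.x)
# On ZooKeeper 3.5+ the recursive delete command is `deleteall /storm` instead.
```

After the subtree is gone, a freshly started Nimbus recreates it and can be elected leader, but all running topologies are forgotten and must be resubmitted.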
Upvotes: 0
Views: 223
Reputation: 181
What is the reason for using just one instance of Nimbus? Is it because it's staging? I like Nimbus HA's fault-tolerant architecture, and if it's possible you should just go the route of having an active and a standby Nimbus with distributed state storage configured.
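For completeness, wiring up Nimbus HA in Storm 1.x is mostly a matter of listing the candidate hosts in `nimbus.seeds`. A minimal `storm.yaml` sketch with hypothetical hostnames:

```yaml
# storm.yaml shared across the cluster (hostnames are illustrative)
storm.zookeeper.servers:
  - "zk1.example.com"
  - "zk2.example.com"

# All Nimbus candidates; ZooKeeper elects one as leader and a standby
# takes over if the leader fails.
nimbus.seeds: ["nimbus1.example.com", "nimbus2.example.com"]
```

Note that leader election alone does not replicate topology jars; for the standby to be electable, the topology code also has to reach it, e.g. via Storm's blobstore replication or a distributed storage backend, which is the "distributed state storage" part.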
Upvotes: 1