Reputation: 5974
By default, when a Storm spout or bolt encounters an exception, it restarts the spout or bolt and tries again. Is there any configuration option to make it stop the topology, perhaps after N repeated attempts? (For example, Hadoop tries 4 times before giving up.)
I had a Storm topology run for 77 days with one bolt raising an exception on every tuple. In situations like that, I'd rather it fail so that I notice that something's wrong.
Upvotes: 2
Views: 1717
Reputation: 5918
As far as I have seen, Storm won't retry a tuple that caused an exception by itself. By default it just continues processing the next tuple; the same tuple won't be retried unless the spout has a fail method implemented.
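Storm's spout interface does expose `ack(Object msgId)` and `fail(Object msgId)` callbacks, which is where replay logic lives. A minimal sketch of that bookkeeping, as plain Java with no Storm dependency (class and method names here are illustrative, not Storm API):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Sketch of the replay bookkeeping a spout's fail() hook can do:
// remember emitted tuples until acked, re-queue them on failure.
class PendingTracker {
    private final Map<Object, String> pending = new HashMap<>(); // msgId -> payload
    private final Queue<String> replayQueue = new ArrayDeque<>();

    // Called when a tuple is emitted with a message id: remember it until acked.
    void emitted(Object msgId, String payload) {
        pending.put(msgId, payload);
    }

    // Storm calls ack(msgId) on success: forget the tuple.
    void ack(Object msgId) {
        pending.remove(msgId);
    }

    // Storm calls fail(msgId) on failure or timeout: re-queue for replay.
    void fail(Object msgId) {
        String payload = pending.remove(msgId);
        if (payload != null) {
            replayQueue.add(payload);
        }
    }

    // The spout's nextTuple() would check this before reading new input.
    String nextReplay() {
        return replayQueue.poll(); // null if nothing to replay
    }
}
```

Without this kind of tracking, a failed tuple is simply dropped, which matches the behavior described above.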
Upvotes: 0
Reputation: 20245
There is currently no option for halting the topology. And honestly, killing the whole topology just because of one exception is brute force, IMHO.
In your scenario, those exceptions should be handled in the application layer.
Is there any configuration option to make it stop the topology, perhaps after N repeated attempts?
There is no ready-made solution for that, but you can do it yourself by keeping track of retried tuples in the spout. If a threshold is reached, log the tuple or send it to a messaging queue.
I had a Storm topology run for 77 days with one bolt raising an exception on every tuple.
Then maybe there is a bug in your bolt's code?
One strategy is to send failed tuples to a message queue or an event bus (such as HornetQ, Apache Kafka, or Redis) and add a listener, so you are notified immediately about a poisonous tuple.
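That dead-letter path could look like the sketch below. An in-memory queue stands in for the external bus (Kafka, HornetQ, Redis), and the names are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Tuples that exhausted their retries are published to a channel; a
// listener drains it and raises an alert. A BlockingQueue stands in
// for an external message bus such as Kafka or HornetQ here.
class DeadLetterChannel {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Spout/bolt side: hand off a tuple that repeatedly failed.
    void publish(String tuple) {
        queue.add(tuple);
    }

    // Listener side: fetch the next poisonous tuple, alerting on it.
    // Returns null if nothing is waiting.
    String pollNext() {
        String tuple = queue.poll();
        if (tuple != null) {
            System.err.println("ALERT: poisonous tuple: " + tuple);
        }
        return tuple;
    }
}
```

With a listener attached to the real bus, a bolt that fails on every tuple (as in the 77-day scenario) would surface within minutes instead of going unnoticed.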
Upvotes: 2