Michal Kordas

Reputation: 10925

Stop job instead of retrying for particular exceptions in Apache Flink

I'm using the default restart strategy for my jobs, and it works fine for issues that may resolve themselves after some time (network outage, out of memory, Kafka unavailable, etc.). However, some exceptions usually indicate a bug in the code (e.g. NullPointerException or any other unhandled exception), and in those cases I don't want to apply any restart strategy, because no number of restarts will resolve the issue.

Is there any way to stop a job from inside the job itself in such cases, despite the configured restart strategy?
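For context, the restart behaviour I'm relying on is roughly equivalent to configuring a fixed-delay strategy explicitly; the attempt count and delay below are placeholders, not my actual values:

    import org.apache.flink.api.common.restartstrategy.RestartStrategies;
    import org.apache.flink.api.common.time.Time;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    import java.util.concurrent.TimeUnit;

    public class RestartStrategyExample {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // A fixed-delay strategy retries every failure, including a NullPointerException
            // caused by a bug, which is exactly the behaviour I want to avoid.
            // (3 attempts / 10 s delay are illustrative placeholders.)
            env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, Time.of(10, TimeUnit.SECONDS)));
        }
    }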

Upvotes: 2

Views: 936

Answers (1)

SeedofWInd

Reputation: 147

I think Flink currently does not support what you are trying to achieve, but one potential solution is to flip this around (see the sketch after the list):

  1. Set the restart strategy to no restarts.
  2. Catch the exceptions that you think will be resolved after some time (for example, a network blip) and retry in place.
  3. For other failure cases, throw the exception so that the job stops.
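
Here is a minimal sketch of that pattern in Java with the DataStream API; callExternalService is a hypothetical stand-in for whatever transient-failure-prone call your job makes:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.common.restartstrategy.RestartStrategies;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class NoRestartWithInPlaceRetry {

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // 1. No restart strategy: any uncaught exception fails the whole job.
            env.setRestartStrategy(RestartStrategies.noRestart());

            env.fromElements("a", "b", "c")
               .map(new RetryingMapper())
               .print();

            env.execute("no-restart-with-in-place-retry");
        }

        /** Retries transient failures in place; rethrows anything else so the job stops. */
        public static class RetryingMapper implements MapFunction<String, String> {

            private static final int MAX_ATTEMPTS = 3;

            @Override
            public String map(String value) throws Exception {
                for (int attempt = 1; ; attempt++) {
                    try {
                        // Hypothetical call that may fail transiently (network blip, Kafka hiccup, ...).
                        return callExternalService(value);
                    } catch (java.io.IOException e) {
                        // 2. Transient failure: retry in place with a simple linear backoff.
                        if (attempt >= MAX_ATTEMPTS) {
                            throw e; // retries exhausted; with noRestart() the job stops
                        }
                        Thread.sleep(1000L * attempt);
                    }
                    // 3. Anything not caught above (e.g. a NullPointerException from a bug)
                    //    propagates and, with noRestart(), terminates the job.
                }
            }

            private String callExternalService(String value) throws java.io.IOException {
                // Placeholder implementation for the sketch.
                return value.toUpperCase();
            }
        }
    }

With this inverted approach the retry decision lives in your user code rather than in the job-level restart strategy, so you can be as selective as you like about which exception types are worth retrying.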

Upvotes: 2
