Reputation: 443
This is regarding dynamic rescaling in Flink 1.5.
I am using YARN to run Flink jobs, and I start these jobs with a static amount of resources. Is there any option for a job to scale itself out under specific conditions, for example when it runs into memory issues?
Applications can be rescaled without manually triggering a savepoint. Under
the hood, Flink will still take a savepoint, stop the application, and
rescale it to the new parallelism.
This means that I will have to monitor my job's memory and trigger the rescale manually. Is there any workaround to handle this?
Upvotes: 2
Views: 634
Reputation: 416
You would still need to monitor your application, but the rescaling can be done easily by running:
./bin/flink modify <JOB-ID> -p <NEW-PARALLELISM>
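If you want to approximate the automatic behaviour you asked about, you can wrap that command in a small watchdog script. The sketch below is only an illustration: the job id, thresholds, step size, and the get_heap_used_pct helper are placeholders you would replace with your own monitoring; Flink does not ship such a helper.

#!/usr/bin/env bash
# Hypothetical watchdog: poll a heap-usage metric and call `flink modify`
# when it crosses a threshold. Job id, thresholds, and get_heap_used_pct
# are placeholders, not something Flink provides.

FLINK_BIN=./bin/flink
JOB_ID="<JOB-ID>"          # from `./bin/flink list`
MAX_PARALLELISM=128        # keep this <= the job's configured maxParallelism
PARALLELISM=4              # parallelism the job was started with
HEAP_THRESHOLD=85          # percent heap usage that triggers a rescale

while true; do
  # Placeholder: fetch TaskManager heap usage (in percent) from your
  # metrics system of choice.
  heap_used_pct=$(get_heap_used_pct)

  if [ "$heap_used_pct" -gt "$HEAP_THRESHOLD" ] && [ "$PARALLELISM" -lt "$MAX_PARALLELISM" ]; then
    PARALLELISM=$((PARALLELISM * 2))
    if [ "$PARALLELISM" -gt "$MAX_PARALLELISM" ]; then
      PARALLELISM=$MAX_PARALLELISM
    fi
    echo "Heap at ${heap_used_pct}%, rescaling job to parallelism ${PARALLELISM}"
    "$FLINK_BIN" modify "$JOB_ID" -p "$PARALLELISM"
  fi

  sleep 60
done

Doubling the parallelism on each trigger is just one possible policy; anything that keeps the new value at or below the job's maxParallelism would work.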
Upvotes: 4
Reputation: 957
As of 1.5, Flink doesn't support what you want out of the box. The process for rescaling a job is: start it with an initial parallelism and a maxParallelism, and then rescale it manually to any new parallelism that satisfies initialParallelism <= parallelism <= maxParallelism.
Upvotes: 3
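For completeness, a sketch of that manual rescale flow with the CLI, where the savepoint directory, job id, and jar name are placeholders for your own values:

# 1. Take a savepoint and stop the job (target directory is an example path)
./bin/flink cancel -s hdfs:///flink/savepoints <JOB-ID>

# 2. Resubmit the job from the savepoint with a new parallelism;
#    the new value must stay within the bounds described above
./bin/flink run -s hdfs:///flink/savepoints/<SAVEPOINT-DIR> -p 8 ./my-job.jar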