Reputation: 225
I am planning to dynamically scale a Flink app up and down. The app consumes events from Kafka using the Flink Kafka connector.
Since the "warm-up" of the app takes a few minutes (caching, ...) and changing the parallelism level involves a restart, I would prefer to submit additional jobs (scale up) or kill jobs (scale down) instead of changing the parallelism level.
I wonder: in terms of performance, logic, and execution plan, are there any differences between this approach and Flink's built-in parallel execution?
In other words, what would be the differences between running 10 identical Flink jobs and running one job with parallelism level = 10 ( env.setParallelism(10) )?
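For illustration, a minimal sketch of the single-job variant (the topic name, group id, bootstrap servers, and the map step are placeholders I made up); the alternative would be 10 separate submissions of the same job with parallelism 1:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import java.util.Properties;

public class EventJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Variant A: one job with built-in parallelism. Flink splits the
        // Kafka partitions across the 10 source subtasks of this single job.
        env.setParallelism(10);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");
        props.setProperty("group.id", "event-app");

        env.addSource(new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props))
           .map(String::toUpperCase)   // placeholder processing
           .print();

        env.execute("event-app");
        // Variant B would be 10 separate submissions of this job with
        // parallelism 1. Note that FlinkKafkaConsumer assigns partitions
        // statically within each job (group.id is only used for committing
        // offsets), so 10 identical jobs would each consume every partition.
    }
}
```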
Upvotes: 0
Views: 192
Reputation: 2845
The parallelism level determines whether the streams between operators are one-to-one (forwarding) or redistributing: when two connected operators run with the same parallelism, records are forwarded subtask-to-subtask (and the operators can be chained into one task), whereas when the parallelism changes between operators (or you use keyBy() / rebalance()), Flink redistributes the records across the network.
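A minimal sketch of that effect (the pipeline itself is a placeholder; only the parallelism transitions matter):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RedistributionSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(10);

        env.fromElements("a", "b", "c")     // fromElements is a non-parallel source (parallelism 1)
           // source(1) -> map(10): parallelism changes, so Flink inserts a
           // redistributing (round-robin rebalance) exchange
           .map(String::toUpperCase)
           // map(10) -> filter(10): same parallelism, so records are forwarded
           // one-to-one and the two operators can be chained into one task
           .filter(s -> !s.isEmpty())
           // filter(10) -> sink(1): parallelism changes again -> redistributing
           .print().setParallelism(1);

        env.execute("redistribution-sketch");
    }
}
```

With 10 identical jobs, by contrast, there is no exchange between the copies at all: each job is planned, executed, and checkpointed independently, so records can never be redistributed (e.g. via keyBy()) across them.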
Upvotes: 1