Reputation: 12998
Given a Flink streaming job which applies a map() operation to the stream. This map() operation reads its configuration from some properties and maps data accordingly. For example, the configuration specifies to read attribute "input" and write its value to the stream under a different attribute name, "output". This already works fine.

Now the configuration changes, for example the transformation is to use a different attribute name for the output. Therefore I am looking for a way to let all Flink tasks re-read a new configuration at run-time.
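For context, the config-driven map() described above might look roughly like this. This is a minimal sketch with hypothetical names (`RenameMapper`, `from`, `to` are mine, not from the job); in a real job this would implement Flink's MapFunction and read the names from the properties:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: renames one attribute of a record according to
// configuration read once at construction time. The limitation the question
// describes is that "from"/"to" are fixed after the job is deployed.
class RenameMapper {
    private final String from;
    private final String to;

    RenameMapper(String from, String to) {
        this.from = from;
        this.to = to;
    }

    Map<String, Object> map(Map<String, Object> record) {
        Map<String, Object> out = new HashMap<>(record);
        Object value = out.remove(from);   // read configured input attribute
        out.put(to, value);                // write under configured output name
        return out;
    }
}
```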
Is there a possibility to update this configuration programmatically in Flink, without redeployment? In case it matters, I use KafkaSource, JdbcSink, and KafkaSink as provided by Flink.

Upvotes: 0
Views: 671
Reputation: 9245
Normally you would broadcast your configuration changes as a stream, which means they would be sent to every instance of the operator that is doing the attribute transformation. See https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/broadcast_state/ for an example of applying a Rule to a stream of Shapes, which seems to mimic some of your requirements.
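To show the idea without pulling in Flink dependencies, here is a dependency-free sketch of what broadcasting buys you: every parallel instance of the transforming operator receives each config update. In real Flink this would be a BroadcastProcessFunction reading the rule from broadcast state; all class and method names below are illustrative, not Flink API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the broadcast pattern: config updates are delivered to ALL
// parallel mapper instances, so every instance applies the latest rule.
class BroadcastConfigSketch {

    static class MapperInstance {
        private String outputName;  // latest broadcast rule (stand-in for broadcast state)

        MapperInstance(String initialOutputName) {
            this.outputName = initialOutputName;
        }

        void onBroadcast(String newOutputName) {
            this.outputName = newOutputName;  // processBroadcastElement equivalent
        }

        Map<String, Object> map(Map<String, Object> record) {
            Map<String, Object> out = new HashMap<>(record);
            out.put(outputName, out.remove("input"));  // rename per current rule
            return out;
        }
    }

    private final List<MapperInstance> instances = new ArrayList<>();

    BroadcastConfigSketch(int parallelism, String initialOutputName) {
        for (int i = 0; i < parallelism; i++) {
            instances.add(new MapperInstance(initialOutputName));
        }
    }

    // Mimics the broadcast stream: every parallel instance sees the update.
    void broadcastConfig(String newOutputName) {
        for (MapperInstance m : instances) {
            m.onBroadcast(newOutputName);
        }
    }

    MapperInstance instance(int i) {
        return instances.get(i);
    }
}
```

In actual Flink code you would connect your data stream to a broadcast stream of config updates and implement the two callbacks of a BroadcastProcessFunction, with the rule kept in a MapStateDescriptor-backed broadcast state so it survives restarts.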
Upvotes: 1
Reputation: 3864
Normally, this is done by reading the config changes into a stream of their own and then using the connect operation. This way you can handle the mapping of your data stream in the map1 function, and whenever a change in the config is detected it is handled in map2 and stored in state, so that map1 depends on the last received config change.

Not sure if that solves your issue, but it seems it should work just fine.
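A dependency-free sketch of the connect/CoMap idea described above: map1 handles the data stream, map2 handles the config stream and stores the last change. The plain field here stands in for Flink operator state, and the class and attribute names are illustrative, not Flink API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a CoMapFunction over a connected (data, config) stream pair:
// map2 records the latest config, map1 applies it to each data element.
class ConfigAwareCoMap {
    private String outputAttribute;  // stand-in for checkpointed operator state

    ConfigAwareCoMap(String initialOutputAttribute) {
        this.outputAttribute = initialOutputAttribute;
    }

    // map2: invoked for each element of the connected config stream
    void map2(String newOutputAttribute) {
        this.outputAttribute = newOutputAttribute;
    }

    // map1: invoked for each data element; uses the last received config
    Map<String, Object> map1(Map<String, Object> record) {
        Map<String, Object> out = new HashMap<>(record);
        out.put(outputAttribute, out.remove("input"));
        return out;
    }
}
```

In a real job you would write `dataStream.connect(configStream).map(new MyCoMapFunction())` and keep the config in Flink state so it is restored after failures.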
Upvotes: 2