JanOels

Reputation: 141

Can Apache Flink achieve end-to-end-exactly-once with built-in connectors in Table-API/SQL?

I want to know whether Apache Flink (v1.11) can achieve end-to-end exactly-once semantics with the built-in connectors (Kafka, JDBC, File) when using the Table API/SQL.

I can't find anything about this in the documentation, only that I can enable checkpointing in EXACTLY_ONCE mode.
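For context, this is roughly what I mean by that (a minimal sketch; the checkpoint interval is just a placeholder):

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class ExactlyOnceJob {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpoint every 10 seconds with exactly-once guarantees for Flink's internal state.
        env.enableCheckpointing(10_000L, CheckpointingMode.EXACTLY_ONCE);

        // Table environment on top of the checkpointed streaming environment;
        // tables would then be registered and queried via tableEnv.executeSql(...).
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
    }
}
```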

Upvotes: 1

Views: 512

Answers (1)

snntrable

Reputation: 921

This depends on exactly which connectors you use/combine on the source/sink side.

Source

  • Kafka supports exactly-once (a source DDL sketch follows this list)
  • Filesystem supports exactly-once
  • JDBC is not available as a streaming source yet. Check out [2] if that's your requirement.
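As a rough illustration (assuming a StreamTableEnvironment named tableEnv; the topic, servers and schema below are placeholders), a Kafka source table is declared through DDL like this:

```java
// Minimal sketch of a Kafka source table; schema and connection
// options are placeholders, not values from the question.
tableEnv.executeSql(
    "CREATE TABLE orders_source (" +
    "  order_id STRING," +
    "  amount   DOUBLE," +
    "  ts       TIMESTAMP(3)" +
    ") WITH (" +
    "  'connector' = 'kafka'," +
    "  'topic' = 'orders'," +
    "  'properties.bootstrap.servers' = 'localhost:9092'," +
    "  'properties.group.id' = 'my-group'," +
    "  'scan.startup.mode' = 'earliest-offset'," +
    "  'format' = 'json'" +
    ")");
```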

Sink

  • Kafka supports at-least-once (Flink 1.11) and exactly-once (Flink 1.12) [1]
  • Filesystem supports exactly-once.
  • JDBC supports exactly-once if the sink table has a primary key, because Flink then performs upserts in the database; otherwise it is at-least-once (a sink DDL sketch follows this list).
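Again as a sketch (same tableEnv assumption; all connection options are placeholders), an exactly-once Kafka sink on Flink 1.12 and an upsert JDBC sink look roughly like this:

```java
// Kafka sink with exactly-once semantics (Flink 1.12; relies on
// checkpointing and Kafka transactions). Topic and servers are placeholders.
tableEnv.executeSql(
    "CREATE TABLE orders_kafka_sink (" +
    "  order_id STRING," +
    "  amount   DOUBLE" +
    ") WITH (" +
    "  'connector' = 'kafka'," +
    "  'topic' = 'orders-out'," +
    "  'properties.bootstrap.servers' = 'localhost:9092'," +
    "  'format' = 'json'," +
    "  'sink.semantic' = 'exactly-once'" +
    ")");

// JDBC sink: the PRIMARY KEY makes Flink write upserts, which gives
// effectively exactly-once results; without it the sink is at-least-once.
// URL, table name and credentials are placeholders.
tableEnv.executeSql(
    "CREATE TABLE orders_jdbc_sink (" +
    "  order_id STRING," +
    "  amount   DOUBLE," +
    "  PRIMARY KEY (order_id) NOT ENFORCED" +
    ") WITH (" +
    "  'connector' = 'jdbc'," +
    "  'url' = 'jdbc:postgresql://localhost:5432/mydb'," +
    "  'table-name' = 'orders'," +
    "  'username' = 'flink'," +
    "  'password' = 'secret'" +
    ")");
```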

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/kafka.html#consistency-guarantees

[2] https://github.com/ververica/flink-cdc-connectors

Upvotes: 3
