Anirban

Reputation: 277

Kafka design questions - Kafka Connect vs. own consumer/producer

I need to understand when to use Kafka Connect vs. our own consumer/producer written by a developer. We are getting Confluent Platform. Also, to achieve a fault-tolerant design, do we have to run the consumer/producer code (jar file) on all the brokers?

Upvotes: 11

Views: 8794

Answers (3)

Fatema Khuzaima Sagar

Reputation: 395

Kafka Connect: Kafka Connect is an open-source framework with two connector types: source and sink. It is used to move data between external systems (such as databases) and Kafka, and it makes many other systems usable with Kafka. It also helps track changes from databases into Kafka (as mentioned in one of the other answers, Change Data Capture (CDC)). The framework maintains the offsets itself, so that reads and writes resume from the last processed position.

For more details, you can refer to https://docs.confluent.io/current/connect/index.html
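To illustrate the difference: a source connector is configured declaratively rather than coded. A minimal sketch of a JDBC source connector configuration, assuming Confluent's JDBC connector is installed (the connector name, connection URL, and table details here are hypothetical):

```json
{
  "name": "example-jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db-host:5432/inventory",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "topic.prefix": "postgres-"
  }
}
```

POSTing this to the Connect REST API (`/connectors`) starts the connector; Connect then tracks the last `id` it has read as the offset, with no application code written by you.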

The Producer/Consumer:
The producer and consumer are client APIs used by end systems to write data to and read data from Kafka topics. They are used where you want to broadcast data to multiple consumers in a consumer group. Kafka also tracks the committed offsets (and hence the lag) for each consumer group.

No, you don't need to run a producer/consumer while running Kafka Connect. If you want to check that there is no data loss, you can run a consumer while a source connector is running. For sink connectors, the data already written can be verified in your target database by running the appropriate SELECT queries.

Upvotes: 1

JavaTechnical

Reputation: 9357

Kafka Connect is typically used to connect external sources to Kafka, i.e. to produce/consume to/from external sources from/to Kafka.

Anything that you can do with a connector can be done with a producer + consumer.

Readily available connectors just make it easier to connect external sources to Kafka without requiring the developer to write low-level code.

Some points to remember:

  1. If the source and sink are both the same Kafka cluster, a connector doesn't make sense.
  2. If you are doing change data capture (CDC) from a database and pushing the changes to Kafka, you can use a database source connector.
  3. Resource constraints: Kafka Connect runs as a separate process, so double-check the trade-off between resources and ease of development.
  4. Writing your own connector is well and good, unless someone has already written it. If you are using third-party connectors, you need to check how well they are maintained and/or whether support is available.
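On the "separate process" point: that process is a Connect worker, with its own configuration. A minimal sketch of a standalone worker config (file name and broker address are hypothetical), launched with `bin/connect-standalone.sh worker.properties connector.properties`:

```properties
# worker.properties — configuration for a standalone Kafka Connect worker process
bootstrap.servers=broker1:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Standalone mode stores source-connector offsets in a local file
offset.storage.file.filename=/tmp/connect.offsets
```

This worker JVM, its memory, and its monitoring are the extra resources to weigh against writing the code yourself.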

Upvotes: 7

OneCricketeer

Reputation: 191743

do we have to run the consumer/producer code (jar file) on all the brokers?

Don't run client code on the brokers. Let all memory and disk access be reserved for the broker process.

when to use Kafka Connect vs. own consumer/producer

In my experience, these factors should be taken into consideration:

  1. You're planning on deploying and monitoring Kafka Connect anyway, and have the available resources to do so. Again, these workers don't run on the broker machines.
  2. You don't plan on changing the connector code very often, because a change means restarting the whole Connect JVM, which may be running other connectors that don't need to be restarted.
  3. You aren't able to integrate your own producer/consumer code into your existing applications, or simply would rather have a simpler produce/consume loop.
  4. Having structured data not tied to a particular binary format is preferred.
  5. The connector you'd write, or the community connector you'd use, is well tested and configurable for your use cases.

Connect has limited options for fault tolerance compared to the raw producer/consumer APIs, which in turn have the drawbacks of more code to write and more dependence on other libraries.

Note: Confluent Platform is still the same Apache Kafka

Upvotes: 1
