ThisaruG

Reputation: 3412

Create Kafka cluster for testing Java-based Kafka clients

I have a Kafka client and I need to test its functionality. For this I need to create a Kafka cluster locally and connect to it. Due to constraints, I can't use a Docker image or K8s for this.

I did a search and found this class used for testing, but I can't change the broker ports with it.

I tried using Debezium's KafkaCluster, but it intermittently fails to create the cluster, which causes intermittent test failures.

Is there a way to create a Kafka cluster locally to run integration tests for Java-based Kafka clients?

Upvotes: 0

Views: 862

Answers (3)

ThisaruG

Reputation: 3412

Well, the posted answers helped, but I wanted something different, so I ended up creating my own implementation. If anyone wants it, they can use it.

I used kafka.server.KafkaServer and org.apache.zookeeper.server.ZooKeeperServerMain for this.
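
For reference, a minimal sketch of that approach. This is an assumption-heavy illustration, not the poster's actual code: it uses kafka.server.KafkaServerStartable (a Java-friendly wrapper around KafkaServer that exists in Kafka versions before 2.8) instead of calling KafkaServer directly, and the ports and directories are arbitrary test values:

    import java.nio.file.Files;
    import java.util.Properties;

    import kafka.server.KafkaServerStartable;
    import org.apache.zookeeper.server.ServerConfig;
    import org.apache.zookeeper.server.ZooKeeperServerMain;
    import org.apache.zookeeper.server.quorum.QuorumPeerConfig;

    public class EmbeddedKafkaCluster {
        public static void main(String[] args) throws Exception {
            // 1. Start a single ZooKeeper node in a background thread.
            Properties zkProps = new Properties();
            zkProps.setProperty("dataDir", Files.createTempDirectory("zk-data").toString());
            zkProps.setProperty("clientPort", "2181");

            QuorumPeerConfig quorumConfig = new QuorumPeerConfig();
            quorumConfig.parseProperties(zkProps);
            ServerConfig zkConfig = new ServerConfig();
            zkConfig.readFrom(quorumConfig);

            ZooKeeperServerMain zk = new ZooKeeperServerMain();
            new Thread(() -> {
                try {
                    zk.runFromConfig(zkConfig); // blocks, hence the dedicated thread
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }).start();

            // Crude wait for ZooKeeper to bind its port; a real test would poll it instead.
            Thread.sleep(2000);

            // 2. Start a single Kafka broker pointing at that ZooKeeper.
            Properties brokerProps = new Properties();
            brokerProps.setProperty("zookeeper.connect", "localhost:2181");
            brokerProps.setProperty("broker.id", "0");
            brokerProps.setProperty("listeners", "PLAINTEXT://localhost:9092");
            brokerProps.setProperty("log.dirs", Files.createTempDirectory("kafka-logs").toString());
            brokerProps.setProperty("offsets.topic.replication.factor", "1");

            KafkaServerStartable broker = KafkaServerStartable.fromProps(brokerProps);
            broker.startup();
            // Call broker.shutdown() (and stop ZooKeeper) in the test teardown.
        }
    }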

Upvotes: 0

mjuarez

Reputation: 16824

I do this all the time. A Kafka "cluster" is really just at least one Kafka server and at least one ZooKeeper server. Both of them can run locally on the same box you're developing on. Of course, a setup with only one Kafka server and one ZooKeeper node is not recommended for production, since there is no redundancy, but it's great for end-to-end testing, debugging, etc.

Here are the commands I use to run the above locally. I'm assuming you downloaded the latest Confluent Kafka package and are in its bin directory:

nohup ./zookeeper-server-start ../etc/kafka/zookeeper.properties > /dev/null 2>&1 &
nohup ./kafka-server-start ../etc/kafka/server-original.properties > /dev/null 2>&1 &

I also need the schema registry to run locally, so I add this:

nohup ./schema-registry-start ../etc/schema-registry/schema-registry.properties >/dev/null 2>&1 &

Note that all the commands start with nohup, and all the output is discarded by sending it to /dev/null. You can remove those if you actually need to see the log output on the console, or redirect it to an actual log file as needed.

What's nice about the above is that you can package the commands up in a script (make sure to add a sleep 5 between each command to wait for the previous server to come online before starting the next one), and then you can simply run the script every time you need the cluster locally. Note, however, that you'll need to kill the processes by hand later, i.e., find their PIDs with ps and issue a kill command.
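
Once those processes are up, a Java-based client can point at the default listener for an end-to-end check. A minimal sketch, assuming the broker is listening on the default localhost:9092 and the kafka-clients dependency is on the classpath (the topic name is arbitrary):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class LocalClusterSmokeTest {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Assumes the locally started broker uses the default port 9092.
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // get() blocks until the broker acknowledges the write, so an exception
                // or timeout here tells you the local cluster is not reachable.
                producer.send(new ProducerRecord<>("smoke-test", "key", "value")).get();
                System.out.println("Local cluster is up and accepting writes");
            }
        }
    }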

Upvotes: 1

Yannick

Reputation: 1408

I know this sounds like I'm only giving you a link, but I used the Testcontainers library and it does the job fine.

Nothing special to add, except a link to the docs: https://www.testcontainers.org/modules/kafka/
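
For completeness, a minimal sketch of how that looks in Java, assuming the org.testcontainers:kafka module is on the classpath and a Docker daemon is available (note the question rules Docker out); the image tag is just an example:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.testcontainers.containers.KafkaContainer;
    import org.testcontainers.utility.DockerImageName;

    public class KafkaContainerExample {
        public static void main(String[] args) throws Exception {
            try (KafkaContainer kafka =
                     new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))) {
                kafka.start();

                Properties props = new Properties();
                // The container exposes the broker on a random free host port.
                props.put("bootstrap.servers", kafka.getBootstrapServers());
                props.put("key.serializer", StringSerializer.class.getName());
                props.put("value.serializer", StringSerializer.class.getName());

                try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                    producer.send(new ProducerRecord<>("test-topic", "key", "value")).get();
                }
            }
        }
    }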

Upvotes: 1
