Mitro

Reputation: 1260

Is it possible to set groupId in Spring Boot Stream Kafka at startup or compile-time?

I just started using Spring Boot Stream with Kafka.

I created a producer and a consumer. What I need is to have two identical consumers (practically two microservices) but with different groupIds, so both of them will read the topic and receive the same messages.

Right now the groupId lives in the properties.yml file under resources in the Spring Boot project. Is it possible to set this value at compile time as a parameter, or better, at startup?

properties.yml

server:
    port: 8087
eureka:
    client:
        serviceUrl:
            defaultZone: http://IP:8761/eureka
spring:
    application:
        name: employee-producer
    cloud:
        stream:
            kafka:
                binder:
                    brokers: IP:9092
                bindings:
                    greetings-in:
                        destination: greetings
                        contentType: application/json
                    greetings-out:
                        destination: greetings
                        contentType: application/json
    kafka:
        consumer:
            group-id: 500
            client-id: 99

Something like this: kafka consumers
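For reference, Spring Boot's externalized configuration lets command-line arguments and environment variables override values from properties.yml, so the group can be supplied at startup rather than at compile time. A sketch (the jar name and group values below are placeholders, not from the original post):

```shell
# Start two instances of the same artifact with different consumer groups,
# so each group independently receives every message on the topic.
java -jar employee-consumer.jar --spring.kafka.consumer.group-id=group-a

# Or via an environment variable (Spring Boot relaxed binding maps
# SPRING_KAFKA_CONSUMER_GROUP_ID onto spring.kafka.consumer.group-id):
SPRING_KAFKA_CONSUMER_GROUP_ID=group-b java -jar employee-consumer.jar
```

Note that when consuming through Spring Cloud Stream bindings (as in the config above), the consumer group is normally set with `spring.cloud.stream.bindings.greetings-in.group`, which can be overridden the same way.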

Upvotes: 2

Views: 7974

Answers (1)

Ryuzaki L

Reputation: 40058

As per the requirement, you need two consumers in different groups (which is what group.id controls) on the same topic, so that every message is consumed by both consumers.

According to the documentation for group.id:

A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy.

group.id needs to be set when the Kafka consumer factory is initialized:

 props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
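One way to feed that line a value chosen at startup (a minimal sketch; the property name `kafka.group.id` and the defaults are placeholders, not from the original post) is to read a JVM system property before building the consumer config, so the same binary can run under different group ids, e.g. `java -Dkafka.group.id=service-a -jar consumer.jar`:

```java
import java.util.Properties;

public class ConsumerGroupConfig {

    // Resolve the consumer group at startup from a -D system property,
    // falling back to a default when no flag is passed.
    static Properties consumerProps() {
        String groupId = System.getProperty("kafka.group.id", "default-group");
        Properties props = new Properties();
        props.put("group.id", groupId);   // key behind ConsumerConfig.GROUP_ID_CONFIG
        props.put("client.id", "99");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("group.id"));
    }
}
```

Running two instances with different `-Dkafka.group.id` values gives two independent consumer groups on the same topic.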

Whenever a new consumer with a unique group.id joins a topic, it will only consume messages produced after it joins, because by default auto.offset.reset is latest.

For Example:

  1. First send 5 messages to Kafka.
  2. Now add a new consumer (it won't consume those messages, because the default offset reset policy is latest).

To make it consume those earlier messages, auto.offset.reset should be set to earliest.
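In the Spring Boot YAML style used in the question, that setting would look like this (a config fragment, added for illustration):

```yaml
spring:
    kafka:
        consumer:
            auto-offset-reset: earliest
```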

Upvotes: 3
