Yossi Shasha

Reputation: 203

Spring cloud stream producing and consuming the same topic

I have a service that uses Spring Boot and Spring Cloud Stream. The service produces to a certain topic and also consumes from that same topic. When I start the service for the first time and the topic does not yet exist in Kafka, the following exception is thrown:

java.lang.IllegalStateException: The number of expected partitions was: 100, but 3 have been found instead
                at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner$2.doWithRetry(KafkaTopicProvisioner.java:260) ~[spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner$2.doWithRetry(KafkaTopicProvisioner.java:246) ~[spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:286) ~[spring-retry-1.2.0.RELEASE.jar!/:na]
                at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:163) ~[spring-retry-1.2.0.RELEASE.jar!/:na]
                at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.getPartitionsForTopic(KafkaTopicProvisioner.java:246) ~[spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createProducerMessageHandler(KafkaMessageChannelBinder.java:149) [spring-cloud-stream-binder-kafka-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createProducerMessageHandler(KafkaMessageChannelBinder.java:88) [spring-cloud-stream-binder-kafka-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:112) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:57) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:152) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binding.BindingService.bindProducer(BindingService.java:124) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binding.BindableProxyFactory.bindOutputs(BindableProxyFactory.java:238) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binding.OutputBindingLifecycle.start(OutputBindingLifecycle.java:57) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:175) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:50) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:348) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:151) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:114) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
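
For context, the bindings that produce to and consume from this topic are declared along these lines (a simplified sketch using the binding names from the application.yml below; the actual classes are not shown here):

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;

// Channel names match the binding names used in application.yml.
interface UpdaterChannels {

    @Input("test-updater-input")        // consumes test-tenant-update
    SubscribableChannel testUpdaterInput();

    @Output("test-updater-output")      // produces to test-tenant-update (the same topic)
    MessageChannel testUpdaterOutput();

    @Output("tenant-updater-output")    // produces to tenant-entity-update
    MessageChannel tenantUpdaterOutput();
}

@EnableBinding(UpdaterChannels.class)
public class UpdaterService {

    @StreamListener("test-updater-input")
    public void onUpdate(String payload) {
        // ... handle the incoming update, possibly publishing back to the same topic
    }
}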

The application.yml:

spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: kafka
          defaultBrokerPort: 9092
          zkNodes: zookeeper
          defaultZkPort: 2181
          minPartitionCount: 2
          replicationFactor: 1
          autoCreateTopics: true
          autoAddPartitions: true
          headers: type,message_id
          requiredAcks: 1
          configuration:
            "[security.protocol]": PLAINTEXT #TODO: This is a workaround. Should be security.protocol
        bindings:
          test-updater-input:
            consumer:
              autoRebalanceEnabled: true
              autoCommitOnError: true
              enableDlq: true
          test-updater-output: 
            producer:
              sync: true
              configuration:
                retries: 0
          tenant-updater-output: 
            producer:
              sync: true
              configuration:
                retries: 100
      default:
        binder: kafka
        contentType: application/json
        group: test-adapter
        consumer:
          maxAttempts: 1       
      bindings:
        test-updater-input: 
          destination: test-tenant-update
          consumer:
            concurrency: 3
            partitioned: true
        test-updater-output: 
          destination: test-tenant-update
          producer:
            partitionCount: 100
        tenant-updater-output:
          destination: tenant-entity-update
          producer:
            partitionCount: 100

I tried changing the order of the producer and consumer configurations, but it didn't help.

EDIT: I have added the full application.yml. The topic does not exist in Kafka when I boot the service for the first time.
It feels like there is a conflict between the producer and consumer configuration. I think the reason it reports 3 partitions is that the consumer concurrency is 3, so the topic is first created with 3 partitions, and when the binder then moves on to the producer configuration it does not adjust the partition count.

Upvotes: 1

Views: 6930

Answers (1)

Gary Russell

Reputation: 174564

The number of expected partitions was: 100, but 3 have been found instead

The topic has insufficient partitions for your configuration.

partitionCount: 100

Either reduce partitionCount in the configuration to 3, or increase the number of partitions on the topic to 100.

Or set spring.cloud.stream.kafka.binder.autoAddPartitions to true.
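
In terms of the property layout already used in the application.yml above, the two configuration-side options would look something like this (a sketch showing only the relevant keys; pick one approach):

spring:
  cloud:
    stream:
      kafka:
        binder:
          autoAddPartitions: true   # binder-level: allow the binder to add partitions up to the required count
      bindings:
        test-updater-output:
          producer:
            partitionCount: 3       # binding-level: match the number of partitions the topic actually has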

Upvotes: 2
