Meier

Reputation: 3880

Spring Cloud Stream with Kinesis: AmazonDynamoDBLockClient throws exceptions in a loop

We have a quite simple spring-cloud-stream app that reads from an AWS Kinesis stream. It has worked for some time without problems and has only a small workload.

Today we receive more than 100 exceptions per second. (That is more than we have users on this app, so it looks like an endless retry loop.)

WARN --- [s-shard-locks-1] c.a.s.d.AmazonDynamoDBLockClient : 
    Could not acquire lock because of a client side failure in talking to DDB
com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: 
    The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException
...
at org.springframework.integration.aws.lock.DynamoDbLockRegistry$DynamoDbLock.doLock(DynamoDbLockRegistry.java:504) 

The code is quite simple:

@StreamListener(Channels.OUR_CHANNEL)
fun consumeThing(thing: MutableMap<Any, Any>) {
    log.info("thing received: {}.", thing)
    // some methods to write it to our own database
}

We have only a basic configuration:

spring:
  cloud:
    stream:
      defaultBinder: kinesis
      bindings:
        thingsChannel:
          group: aGroup
          destination: aDestination

In the AWS Console, we see a warning that the SpringIntegrationLockRegistry table in DynamoDB exceeds its read capacity. We raised it from 1 to 10, but the problem persists.

How can I configure spring-data-cloud and Kinesis so that it is more resilient and does not retry without some waiting time?

Upvotes: 1

Views: 415

Answers (1)

Artem Bilan

Reputation: 121272

Please take a look at the possible options for the LockRegistry: https://github.com/spring-cloud/spring-cloud-stream-binder-aws-kinesis/blob/master/spring-cloud-stream-binder-kinesis-docs/src/main/asciidoc/overview.adoc#lockregistry

It looks like the policy on the table in the AWS Console has been changed. You might need to reconsider some timeouts and read-write capacities for that table or for the client I've mentioned.
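For illustration only, a tuned binder configuration could look roughly like the sketch below. It assumes the locks.* properties listed in the document linked above (readCapacity, writeCapacity, refreshPeriod, leaseDuration, heartbeatPeriod); the exact names, units, and defaults should be verified against the binder version in use, and the values here are placeholders rather than recommendations:

spring:
  cloud:
    stream:
      kinesis:
        binder:
          locks:
            # capacity units provisioned for the SpringIntegrationLockRegistry table (placeholder values)
            readCapacity: 10
            writeCapacity: 10
            # how often the lock client retries while a lock is held elsewhere (check units in the linked docs)
            refreshPeriod: 5000
            # lease/heartbeat settings for locks that are already held (check units in the linked docs)
            leaseDuration: 60
            heartbeatPeriod: 10

A larger refreshPeriod should space out the polling of the lock table, which together with higher read capacity should reduce how often the ProvisionedThroughputExceededException is hit.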

You can read more about the DynamoDB Lock Client here; it is already out of Spring Cloud Stream's control. BTW, Spring Cloud Stream is the name of the project, not spring-data-cloud...

Upvotes: 1
