Reputation: 1879
We are using Spring Cloud Stream with the Confluent Schema Registry, Avro, and the Kafka binder. All of the services in our data processing pipeline are configured to publish to a shared DLQ Kafka topic, to simplify exception handling and allow failed messages to be replayed. However, it appears that we cannot reliably extract the payloads, because messages with different schemas end up on the single DLQ and we lose track of each original message's schema.
Is there a way to preserve the original schema id of a failed message in the DLQ so that it can be used for seamless replay?
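One thing worth noting: if the DLQ producer forwards the record's raw bytes (e.g. serialized with `ByteArraySerializer`), the schema id is not actually lost, because Confluent's Avro wire format embeds it in the payload itself: a magic byte `0x0` followed by the 4-byte schema id (big-endian), then the Avro body. A replay consumer reading the DLQ as raw bytes can recover the id before deciding how to deserialize. A minimal sketch of that extraction (the class name is my own, not part of any library):

```java
import java.nio.ByteBuffer;

public class SchemaIdExtractor {

    // Confluent wire format: [magic byte 0x0][4-byte schema id, big-endian][Avro payload]
    public static int extractSchemaId(byte[] payload) {
        if (payload == null || payload.length < 5 || payload[0] != 0x0) {
            throw new IllegalArgumentException("Not a Confluent-framed Avro payload");
        }
        // Bytes 1..4 hold the schema id as a big-endian int
        return ByteBuffer.wrap(payload, 1, 4).getInt();
    }
}
```

With the id in hand, the replay service could look the schema up in the registry (or copy the id into a Kafka header when publishing to the DLQ, so downstream tooling doesn't need to parse the bytes). This only works if the bytes written to the DLQ are the original serialized record, not a re-serialized or stringified copy.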
Upvotes: 1
Views: 803