Reputation: 5870
I'm using the Kafka Schema Registry for producing/consuming Kafka messages. For example, I have two fields, both of string type; the pseudo schema is as follows:
{"name": "test1", "type": "string"}
{"name": "test2", "type": "string"}
But after producing and consuming for a while, I needed to modify the schema to change the second field to long type, and then it threw the following exception:
Schema being registered is incompatible with an earlier schema; error code: 409
I'm confused: if the Schema Registry cannot handle schema upgrades/changes, then why should I use the Schema Registry at all, or for that matter, why use Avro?
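For reference, the full record schemas look roughly like this (the record name Test is made up for the example):
Version 1:
{"type": "record", "name": "Test", "fields": [
{"name": "test1", "type": "string"},
{"name": "test2", "type": "string"}
]}
Version 2 (rejected under the default BACKWARD compatibility):
{"type": "record", "name": "Test", "fields": [
{"name": "test1", "type": "string"},
{"name": "test2", "type": "long"}
]}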
Upvotes: 40
Views: 80231
Reputation: 39790
Changing a field's type (just like renaming a field) is not allowed in BACKWARD
compatibility mode, which is the default. As a workaround you can change the compatibility rules for the Schema Registry.
According to the docs:
The schema registry server can enforce certain compatibility rules when new schemas are registered in a subject. Currently, we support the following compatibility rules.
Backward compatibility (default): A new schema is backward compatible if it can be used to read the data written in all previous schemas. Backward compatibility is useful for loading data into systems like Hadoop since one can always query data of all versions using the latest schema.
Forward compatibility: A new schema is forward compatible if all previous schemas can read data written in this schema. Forward compatibility is useful for consumer applications that can only deal with data in a particular version that may not always be the latest version.
Full compatibility: A new schema is fully compatible if it’s both backward and forward compatible.
No compatibility: A new schema can be any schema as long as it’s a valid Avro.
Setting the compatibility to NONE should do the trick.
# Update compatibility requirements globally
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
--data '{"compatibility": "NONE"}' \
http://localhost:8081/config
And the response should be
{"compatibility":"NONE"}
You can also set the compatibility to NONE for a single subject:
# Update compatibility requirements for single subject
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
--data '{"compatibility": "NONE"}' \
http://localhost:8081/config/subject
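Here subject in the URL is a placeholder. With the default TopicNameStrategy, the subject for a topic's message values is named <topic>-value (and <topic>-key for keys), so for a topic called test the call would look like this (the topic name is just an example):
# Update compatibility requirements for the value subject of topic "test"
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
--data '{"compatibility": "NONE"}' \
http://localhost:8081/config/test-value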
I generally discourage setting the compatibility to NONE on a subject unless absolutely necessary.
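Rather than disabling compatibility checks, you can also test whether a candidate schema would be accepted under the current rule before registering it. A sketch, assuming the subject is named test-value and using the question's schema with test2 changed to long:
# Test a new schema against the latest registered version of the subject
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
--data '{"schema": "{\"type\": \"record\", \"name\": \"Test\", \"fields\": [{\"name\": \"test1\", \"type\": \"string\"}, {\"name\": \"test2\", \"type\": \"long\"}]}"}' \
http://localhost:8081/compatibility/subjects/test-value/versions/latest
The response should be {"is_compatible": true} or {"is_compatible": false}.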
Upvotes: 39
Reputation: 1
In your local environment you can do this via Postman:
URL: http://localhost:8081/config
Method: PUT
Body: { "compatibility": "NONE" }
This fix solved the problem for me.
Upvotes: 0
Reputation: 720
If you just need the new schema and don't need the previous schemas in the Schema Registry, you can delete the older schemas as shown below.
I've tested this with confluent-kafka and it worked for me:
# Delete all versions of the subject
curl -X DELETE http://localhost:8081/subjects/Kafka-value
# Delete a specific version of the subject
curl -X DELETE http://localhost:8081/subjects/Kafka-value/versions/1
# Delete the latest version of the subject
curl -X DELETE http://localhost:8081/subjects/Kafka-value/versions/latest
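Note that these are soft deletes. On newer Schema Registry versions you can additionally remove a schema permanently by repeating the call with the permanent=true query parameter after the soft delete:
# Permanently delete version 1 (only allowed after the soft delete above)
curl -X DELETE "http://localhost:8081/subjects/Kafka-value/versions/1?permanent=true"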
Ref: https://docs.confluent.io/platform/current/schema-registry/schema-deletion-guidelines.html
Upvotes: 22
Reputation: 355
You can simply add the new field with a default value, like this (for a null default to be valid Avro, the type has to be a union with null):
{"name": "test3", "type": ["null", "string"], "default": null}
Upvotes: 6
Reputation: 1738
https://docs.confluent.io/current/avro.html You might need to add a "default": null to the new field.
You can also delete the existing schema and register the updated one.
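For the second option, a sketch of deleting and re-registering; the subject name test-value and the escaped schema payload are assumptions:
# Delete the current subject, then register the updated schema as a new version
curl -X DELETE http://localhost:8081/subjects/test-value
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
--data '{"schema": "{\"type\": \"record\", \"name\": \"Test\", \"fields\": [{\"name\": \"test1\", \"type\": \"string\"}, {\"name\": \"test2\", \"type\": \"long\"}]}"}' \
http://localhost:8081/subjects/test-value/versions
The response contains the id assigned to the new schema, e.g. {"id": 2}.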
Upvotes: 7