Reputation: 1106
I am using the Confluent Schema Registry Docker image. When I test it locally (with Kafka also installed locally), it works as expected, but when I try to use it with a remote Kafka cluster, I get an error:
{"error_code":40401,"message":"Subject not found. io.confluent.rest.exceptions.RestNotFoundException: Subject not found.\nio.confluent.rest.exceptions.RestNotFoundException: Subject not found.\n\tat io.confluent.kafka.schemaregistry.rest.exceptions.Errors.subjectNotFoundException(Errors.java:51)\n\tat io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource.listVersions(SubjectVersionsResource.java:157)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(Delegat
ingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat
Below is the command I use to run the Docker container:
docker run --network host -p 8081:8081 -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=first_broker:9092,second_broker:9092,third_broker:9092 -e SCHEMA_REGISTRY_HOST_NAME=0.0.0.0 -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 -e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest
The error stack I get is:
Producer clientId=producer-1] Updated cluster metadata updateVersion 2 to MetadataCache{cluster=Cluster(id = dIU-fffyfHXRDeVgZA4fud_eBw, nodes = [first_broker:9092 (id: 2 rack: subnret-0ecf514e9ghg94d5197a7), second_broker:9092 (id: 1 rack: subrnet-0befbedzd392e5497137), third_broker:9092 (id: 3 rack: subnret-0rrc00cc1dbd14c0350)], partitions = [Partition(topic = topics, partition = 0, leader = 1, replicas = [1,3,2], isr = [1,3,2], offlineReplicas = [])], controller = first_broker:9092 (id: 3 rack: subnret-0c0rr0cc1dbd14c0350))}
Sending POST with input {"schema":"\"string\""} to http://0.0.0.0:8081/subjects/topicName-value/versions
org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.SocketException: Unexpected end of file from server
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:851)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:8
I noticed that in the remote Kafka cluster the _schemas topic gets created, but when I use the console consumer to read the data from this _schemas topic, I get the following results:
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
Any idea how to fix this?
Upvotes: 1
Views: 7340
Reputation: 191671
SCHEMA_REGISTRY_HOST_NAME
should be a resolvable host name, not 0.0.0.0.
Similarly, don't use http://0.0.0.0:8081
in your producer code.
The listeners are the bind address, but they can be left out as well, as long as the port is forwarded and you remove --network host.
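As a rough sketch of what that looks like (my-registry-host here is a placeholder for whatever DNS name or IP your clients can actually resolve; the broker list is taken from your original command):

```shell
# Run without --network host; -p forwards 8081 from the container.
# my-registry-host is a hypothetical resolvable name - substitute your own.
docker run -p 8081:8081 \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=first_broker:9092,second_broker:9092,third_broker:9092 \
  -e SCHEMA_REGISTRY_HOST_NAME=my-registry-host \
  -e SCHEMA_REGISTRY_DEBUG=true \
  confluentinc/cp-schema-registry:latest

# Then point the producer (schema.registry.url) at the resolvable host, e.g.:
# curl http://my-registry-host:8081/subjects
```

Note there is no SCHEMA_REGISTRY_LISTENERS here at all; the Registry then binds its default listener, and the published host name is what clients use to reach it.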
You can ignore the NOOP
messages from the Registry (it writes a couple of those at startup to find the very end of the topic).
Upvotes: 3