Matt McEwan

Reputation: 169

Kafka InvalidReceiveException Invalid receive

We get this in test and prod. It happens continuously, every few seconds, and we don't know where these requests are coming from; at least in test, we don't appear to have a feed from another system.

Our messages are tiny, a few hundred bytes at most.

The reported size is about 1.2 GB. I tried setting socket.request.max.bytes to 1195725856, but then got an OutOfMemoryError, even though the heap size is about 2.5 GB and the OpenShift container's memory limit was 32 GB.

Any help is very welcome!

org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:132)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:545)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:483)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
    at kafka.network.Processor.poll(SocketServer.scala:551)
    at kafka.network.Processor.run(SocketServer.scala:468)
    at java.lang.Thread.run(Thread.java:748)

Upvotes: 9

Views: 24566

Answers (7)

Peter Babbington

Reputation: 1

I was trying to connect our AWS SAP Cloud Connector to a Kafka broker and was getting the error below.

[2023-02-22 21:08:20,652] WARN [SocketServer brokerId=2] Unexpected error from /INTERNAL_IP; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1212498244 larger than 104857600)

After changing the protocol in the Cloud Connector config for the Kafka node from HTTPS to TCP SSL, the connection was successful.

Upvotes: 0

ArtOfWarfare

Reputation: 21507

We were doing a liveness check on Kafka using curl. I think the issue is that Kafka isn't an HTTP server and doesn't handle HTTP requests particularly well.

We switched our health check to just this:

nc -z localhost 9091 || exit 1

This just checks whether anything at all is listening on port 9091. That's where we configured Kafka to be, so if we find something, we assume Kafka is healthy.
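For the OpenShift setup in the question, a hedged sketch of what this might look like as an exec liveness probe in the pod spec (assuming nc is available inside the broker image and Kafka listens on 9091; the timing values are illustrative):

livenessProbe:
  exec:
    command: ["sh", "-c", "nc -z localhost 9091 || exit 1"]
  initialDelaySeconds: 30
  periodSeconds: 10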

Upvotes: 1

user2758406

Reputation: 596

This was solved by removing the spring.kafka.ssl.protocol property from the producer.
The issue occurs when the Kafka broker does not support SSL but the producer uses it. It's not related to message size; it's very unlikely a producer is actually sending messages exceeding 100 MB.

I spent nearly 90 minutes figuring out this SSL problem because I had a custom Kafka producer inside a config bean.
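For anyone not on Spring, a minimal plain-Java sketch of the same mismatch (the broker address, topic, and class name here are hypothetical): the client starts a TLS handshake, and the broker's PLAINTEXT listener misreads the handshake bytes as a huge request length, logging the InvalidReceiveException above.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProtocolMismatchDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // broker listener is PLAINTEXT
        props.put("security.protocol", "SSL");         // the mismatch: client speaks TLS
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // The broker logs InvalidReceiveException; this send just times out client-side.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "hello"));
        }
        // Fix: drop security.protocol (it defaults to PLAINTEXT) or point the
        // client at the broker's SSL listener instead.
    }
}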

Upvotes: 11

Tyler Stevens

Reputation: 1

I struggled with the "InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)" for darn near a whole day, until I finally ran my test in debug mode and went almost line by line. For me it turned out that I had placed some Kafka env variable values (KAFKA_KEY and KAFKA_SECRET in this case) into my .zshrc for use with kafkacat. Little did I know that my Docker container was also picking up those values and attempting to use them with my dev env, which caused problems similar to the SSL vs. non-SSL protocol mismatch described above. So I just renamed the variables in my .zshrc and everything worked fine after that.

Upvotes: 0

Vik_Technologist

Reputation: 61

Got the same error during a local installation of ZooKeeper and Kafka.

It got resolved by increasing socket.request.max.bytes in KAFKA_HOME/config/server.properties:

# The maximum size of a request that the socket server will accept (protection against OOM)
# Default value: 104857600
socket.request.max.bytes=500000000

Upvotes: 2

Matt McEwan

Reputation: 169

It was our fault: we were curling the Kafka port for a liveness probe in OpenShift. curl is an HTTP client; Kafka speaks its own binary protocol over TCP.

We'll use netcat instead.
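That also explains the oddly specific size in the error. Kafka reads the first four bytes of each incoming connection as a big-endian request length, and an HTTP request starts with the ASCII bytes "GET ", which decode to exactly 1195725856. A quick sketch demonstrating this (the class name is hypothetical):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class InvalidReceiveSize {
    public static void main(String[] args) {
        // Kafka interprets the first 4 bytes on the wire as the request size.
        byte[] httpPrefix = "GET ".getBytes(StandardCharsets.US_ASCII);
        int size = ByteBuffer.wrap(httpPrefix).getInt();
        System.out.println(size); // prints 1195725856 -- the size in the error
    }
}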

Upvotes: 4

Giorgos Myrianthous

Reputation: 39950

Sounds like a mismatched protocol issue; maybe you are trying to connect over SSL to a non-SSL listener. If you are using the default broker port, you need to verify that :9092 is the SSL listener port on that broker.

For example,

listeners=SSL://:9092
advertised.listeners=SSL://:9092
inter.broker.listener.name=SSL

should do the trick for you (make sure you restart Kafka after reconfiguring these properties).

Alternatively, you might be receiving a request that is genuinely too large. The limit is the default value of socket.request.max.bytes, which is 100 MB (104857600 bytes). So if you have messages bigger than 100 MB, try increasing this value in server.properties.

Upvotes: 4
