Reputation: 135
Spark readStream for Kafka fails with the following error:
org.apache.kafka.common.errors.RecordTooLargeException (The message is 1166569 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.)
How do we bump up max.request.size?
Code:
val ctxdb = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "ip:port")
  .option("subscribe", "topic")
  .option("startingOffsets", "earliest")
  .option("failOnDataLoss", "false")
  .option("max.request.size", "15728640")
  .load()
We have also tried setting option("max.partition.fetch.bytes", "15728640"), with no luck.
Upvotes: 3
Views: 6877
Reputation: 149518
You need to add the kafka. prefix to the writer stream setting:
.option("kafka.max.request.size", "15728640")
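For context, here is a minimal sketch of what the write side might look like with the prefixed option. The DataFrame name, servers, topic, and checkpoint path are placeholders rather than details from the question; Spark forwards any option prefixed with kafka. straight through to the underlying Kafka producer:

// Minimal sketch, assuming a streaming DataFrame `df` and placeholder
// connection details; the "kafka." prefix forwards the setting to the
// Kafka producer that Spark creates for the sink.
val query = df.writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "ip:port")
  .option("topic", "topic")
  .option("kafka.max.request.size", "15728640")
  .option("checkpointLocation", "/path/to/checkpoint")
  .start()

On the read side, the analogous consumer setting likewise needs the prefix, i.e. kafka.max.partition.fetch.bytes rather than max.partition.fetch.bytes.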
Upvotes: 4