Reputation: 4928
I want to use gRPC to expose an interface for bidirectional transfer of large data sets (~100 MB) between two services. Because gRPC imposes a 4 MB message size limit by default, it appears that the preferred way to do this is to manually stream the data in chunks and reassemble them at the receiving end [1][2].
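For concreteness, this is roughly what I understand the chunked approach to look like in Python; the proto, the transfer_pb2 module, and the chunk size are hypothetical placeholders, not working code from my services:

    # Hypothetical proto behind this sketch:
    #   service Transfer {
    #     rpc Upload (stream Chunk) returns (UploadStatus);
    #   }
    #   message Chunk { bytes data = 1; }
    import transfer_pb2  # hypothetical generated module

    CHUNK_SIZE = 1024 * 1024  # 1 MB, safely below the 4 MB default limit

    def chunk_generator(payload: bytes):
        # Sender: slice the in-memory payload into a stream of Chunk messages.
        for offset in range(0, len(payload), CHUNK_SIZE):
            yield transfer_pb2.Chunk(data=payload[offset:offset + CHUNK_SIZE])

    def reassemble(chunk_stream) -> bytes:
        # Receiver: concatenate the streamed chunks back into one contiguous blob.
        return b"".join(chunk.data for chunk in chunk_stream)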
However, gRPC also allows increasing the message size limit via the grpc.max_receive_message_length and grpc.max_send_message_length channel options, making it possible to directly transmit a message of up to ~2 GB in size, without any manual chunking or streaming. Quick testing indicates that this simpler approach works equally well in terms of performance and throughput, making it seem more desirable for this use case. Assume that the entire data set is needed in memory.
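For example, in Python raising the limits only takes a pair of channel options (the address and the 200 MB cap below are arbitrary placeholders I'd tune for my ~100 MB payloads):

    import grpc

    MAX_MESSAGE_LENGTH = 200 * 1024 * 1024  # 200 MB, headroom over the ~100 MB payload

    # Client side: raise both directions on the channel.
    channel = grpc.insecure_channel(
        "localhost:50051",  # placeholder address
        options=[
            ("grpc.max_send_message_length", MAX_MESSAGE_LENGTH),
            ("grpc.max_receive_message_length", MAX_MESSAGE_LENGTH),
        ],
    )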
Is one of these approaches inherently better than the other? Are there any potential side effects of the simpler non-chunked approach? Can I rely on MTU-dependent fragmentation at the lower layers to avoid network delays and other drawbacks?
References:
Upvotes: 11
Views: 11679
Reputation: 26434
The 4 MB limit is there to protect clients/servers that haven't thought about message size constraints. gRPC itself is fine with going much higher (hundreds of MBs), but most applications could be trivially attacked, or could accidentally run out of memory, if they allowed messages of that size.
If you're willing to receive a 100 MB message all-at-once, then increasing the limit is fine.
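Both sides need the higher limit; for example, in Python the receiving server takes the same kind of options as a client channel (a sketch, with an arbitrary 200 MB cap):

    import grpc
    from concurrent import futures

    MAX_MESSAGE_LENGTH = 200 * 1024 * 1024  # arbitrary 200 MB cap

    # Server side: grpc.max_receive_message_length is what governs incoming
    # messages; setting the send limit too keeps the config symmetric for
    # large responses.
    server = grpc.server(
        futures.ThreadPoolExecutor(max_workers=10),
        options=[
            ("grpc.max_receive_message_length", MAX_MESSAGE_LENGTH),
            ("grpc.max_send_message_length", MAX_MESSAGE_LENGTH),
        ],
    )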
Upvotes: 14