Reputation: 9324
Here is my scenario: I maintain a service that acts primarily as an API gateway. It receives an HTTP REST request, makes multiple gRPC service calls, and then combines the responses into a single contextual response.
This service is running Jetty, currently configured with 250 threads.
I have several different back-end gRPC services that I call, and for each service I'm currently creating one ManagedChannel and one BlockingStub, which I share across all of the worker threads.
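For reference, my per-service setup looks roughly like this (InventoryServiceGrpc stands in for one of my generated service classes; host/port are placeholders):

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// One ManagedChannel + one BlockingStub per back-end service, created once at
// startup and shared by all Jetty worker threads.
public class InventoryClient {
    private final ManagedChannel channel;
    private final InventoryServiceGrpc.InventoryServiceBlockingStub stub;

    public InventoryClient(String host, int port) {
        this.channel = ManagedChannelBuilder.forAddress(host, port)
                .usePlaintext()
                .build();
        this.stub = InventoryServiceGrpc.newBlockingStub(channel);
    }

    public InventoryServiceGrpc.InventoryServiceBlockingStub stub() {
        return stub;
    }

    public void shutdown() {
        channel.shutdown();
    }
}
```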
I know that this is fine, since both the Channel and Stub are thread-safe, and there is no shared state amongst my threads (all my requests are idempotent).
However, I'm curious if this is the "right" way to do things. I've read other discussions about pooling Channels, or about having one Channel and multiple Stubs, but if I'm not hitting a Channel's I/O limit I can't see the benefit (since, under the hood, each ClientCall executes in the calling thread).
Is there a specific pointer to Java gRPC 'best practice' documentation that would help me with this?
Upvotes: 3
Views: 2168
Reputation: 26394
It sounds like what you're doing is fine. Sharing the ManagedChannel as much as reasonable/possible is the most important piece. It doesn't really matter whether you share stubs or not, nor whether you share interceptors. It's a bit unclear whether you could share ManagedChannels across services (if any of the channels are to the same target).
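For instance, if two of your back-end services sit behind the same target, one channel could back stubs for both. A rough sketch (the service class names and target are placeholders):

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public final class SharedChannelExample {
    public static void main(String[] args) {
        // One channel to the shared target, used by stubs for two different services.
        ManagedChannel shared = ManagedChannelBuilder
                .forTarget("backend.internal:443")
                .useTransportSecurity()
                .build();

        InventoryServiceGrpc.InventoryServiceBlockingStub inventory =
                InventoryServiceGrpc.newBlockingStub(shared);
        PricingServiceGrpc.PricingServiceBlockingStub pricing =
                PricingServiceGrpc.newBlockingStub(shared);

        // ... use the stubs from any thread; shut the channel down once at exit.
        shared.shutdown();
    }
}
```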
You are right that some use cases may want a "pool" of channels for higher byte throughput, but this is a minority case. Even then you can "hide" that logic by creating a Channel (or even implementing ManagedChannel) that does round-robin across multiple ManagedChannels, and share that "one" Channel as much as possible.
Upvotes: 2