Artem Golovko

Reputation: 15

Consequences of using Jetty 12 HTTP2ServerConnectionFactory with high maxConcurrentStreams

I would like to understand what the consequences are of configuring HTTP2ServerConnectionFactory::maxConcurrentStreams to a high value (for example 100k). The default value is 128; are there any reasons for that?

I have a use case where I'm using HTTP/2 for time-series streaming: e.g. subscribeOn(sensorId) produces an HTTP/2 stream for the specified sensorId. This is quite convenient on the client side, because when the client needs to unsubscribe (the sensor is no longer visible on the screen), it simply closes the HTTP request; we don't need to send another request to unsubscribe or to change our list of subscribed sensors.

But with that approach the default value of 128 is too small, since the system may have 100M sensors. I was wondering whether I should change the API to allow subscribing to multiple sensors within one stream (and introduce an unsubscribe/change-subscription mechanism), or just increase maxConcurrentStreams to a high value.

Thanks!

Upvotes: 0

Views: 27

Answers (1)

sbordet

Reputation: 18597

I would like to understand what the consequences are of configuring HTTP2ServerConnectionFactory::maxConcurrentStreams to a high value (for example 100k). The default value is 128; are there any reasons for that?

The value of 128 was chosen because most browsers and most other servers use a value in the same ballpark.

Being able to manage 128 concurrent streams seems enough for current web pages, so for web usage it should be a good default.

For non-web usage, like proxying, server-to-server communication, or non-browser client-server communication, larger values may be used, also depending on the server hardware.

There is no point in configuring, say, 1000 concurrent streams on a server running on 0.75 CPU cores in a container :D
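For reference, the limit is a setter on the connection factory. Below is a minimal sketch of an embedded Jetty 12 server, assuming Jetty is on the classpath; the port and stream limit are illustrative, and a production deployment would typically pair HTTP2ServerConnectionFactory with TLS and ALPN rather than use it bare on a connector.

```java
import org.eclipse.jetty.http2.server.HTTP2ServerConnectionFactory;
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class Http2StreamLimit
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        HttpConfiguration httpConfig = new HttpConfiguration();
        HTTP2ServerConnectionFactory h2 = new HTTP2ServerConnectionFactory(httpConfig);

        // Raise the per-connection concurrent stream limit from the default of 128.
        // Size this to the hardware, not to the number of sensors.
        h2.setMaxConcurrentStreams(1024);

        ServerConnector connector = new ServerConnector(server, h2);
        connector.setPort(8080); // illustrative port
        server.addConnector(connector);
        server.start();
    }
}
```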

100k or 100M concurrent streams on a single connection does not make much sense, even with your use case.

The main reason is that you greatly risk exhausting the session flow control window with so many streams, so that the communication will be severely slowed down. With 100k streams, each stream needs to send only 167 bytes to exhaust the default Jetty client session flow control window of 16 MiB.
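The arithmetic behind that 167-byte figure is just the shared session window divided by the stream count:

```java
public class FlowControlBudget
{
    public static void main(String[] args)
    {
        // Default Jetty client HTTP/2 session flow control window: 16 MiB.
        long sessionWindowBytes = 16L * 1024 * 1024;
        long concurrentStreams = 100_000;

        // Bytes each stream can send before the shared session window is exhausted
        // (assuming the window is split evenly across all streams).
        long bytesPerStream = sessionWindowBytes / concurrentStreams;
        System.out.println(bytesPerStream + " bytes per stream"); // 167 bytes per stream
    }
}
```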

And no, enlarging the window further will just cause more memory pressure and likely a cascade of other problems.

Using one HTTP/2 stream per sensor is obviously limited. Sending one message to unsubscribe (likely a much rarer event with respect to time series events) does not seem that much of a cost.

I would suggest that you look into CometD (disclaimer, I am the project lead).

It is a library based on Jetty that provides a highly scalable broker for web messaging. Your use case fits CometD perfectly.

For your use case you would be able to subscribe to a channel per sensor (although you can have alternatives to this scheme, like one channel for all sensors, and a sensor ID in each message, or split the sensors in groups, etc.), and receive messages for that sensor.

CometD will take care of the underlying protocol implementation details (whether to use WebSocket or HTTP).
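As a rough idea of what the client side could look like with the CometD Java client: the server URL, channel naming scheme (one channel per sensor), and timeout below are assumptions for illustration, and the snippet needs a running CometD server to do anything.

```java
import org.cometd.bayeux.client.ClientSessionChannel;
import org.cometd.client.BayeuxClient;
import org.cometd.client.http.jetty.JettyHttpClientTransport;
import org.eclipse.jetty.client.HttpClient;

public class SensorSubscriber
{
    public static void main(String[] args) throws Exception
    {
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        // Hypothetical CometD endpoint; replace with your server's URL.
        BayeuxClient client = new BayeuxClient("http://localhost:8080/cometd",
            new JettyHttpClientTransport(null, httpClient));

        client.handshake();
        client.waitFor(5000, BayeuxClient.State.CONNECTED);

        // Assumed scheme: one channel per sensor, e.g. "/sensor/42".
        String sensorId = "42";
        client.getChannel("/sensor/" + sensorId).subscribe(
            (ClientSessionChannel channel, org.cometd.bayeux.Message message) ->
                System.out.println("sensor " + sensorId + ": " + message.getData()));

        // Unsubscribing later is a single message, not a new HTTP/2 stream per sensor.
    }
}
```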

In this way, you would concentrate on your application, rather than building a scalable infrastructure; the latter is provided by CometD.

See the extensive CometD documentation for more information: https://docs.cometd.org/

Upvotes: 1
