UncleDolan

Reputation: 13

http/2 dynamic table size update clarification

In the http/2 protocol we see the following statement for dynamic table size update:

SETTINGS_HEADER_TABLE_SIZE (0x1):  Allows the sender to inform the
      remote endpoint of the maximum size of the header compression
      table used to decode header blocks, in octets.  The encoder can
      select any size equal to or less than this value by using
      signaling specific to the header compression format inside a
      header block (see [COMPRESSION]).  The initial value is 4,096
      octets.

According to the RFC, the initial table size for both the encoder and the decoder is 4,096 octets.

In the SETTINGS frame in Wireshark, I can see the new table size being passed to the ENDPOINT (google.com in this case):

0000   00 00 12 04 00 00 00 00 00 **00 01 00 01 00 00** 00
0010   04 00 02 00 00 00 05 00 00 40 00

00 01 00 01 00 00 is the entry for SETTINGS_HEADER_TABLE_SIZE = 65536 (a 16-bit identifier 0x0001 followed by the 32-bit value 0x00010000).
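For reference, the captured frame can be decoded with nothing but the standard library. This is a minimal sketch following the HTTP/2 frame layout (9-byte header, then 6-byte settings entries); the variable names are mine:

```python
import struct

# Raw bytes of the SETTINGS frame from the capture above (27 bytes total).
frame = bytes.fromhex(
    "000012" "04" "00" "00000000"  # length=18, type=4 (SETTINGS), flags=0, stream 0
    "0001" "00010000"              # SETTINGS_HEADER_TABLE_SIZE   = 65536
    "0004" "00020000"              # SETTINGS_INITIAL_WINDOW_SIZE = 131072
    "0005" "00004000"              # SETTINGS_MAX_FRAME_SIZE      = 16384
)

length = int.from_bytes(frame[0:3], "big")   # 24-bit payload length
frame_type, flags = frame[3], frame[4]
payload = frame[9:9 + length]

settings = {}
for off in range(0, length, 6):              # each entry: 16-bit id + 32-bit value
    ident, value = struct.unpack_from(">HI", payload, off)
    settings[ident] = value

print(settings)   # {1: 65536, 4: 131072, 5: 16384}
```

So the frame carries three settings, and identifier 1 (SETTINGS_HEADER_TABLE_SIZE) indeed has the value 65536.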

What I can't understand is: does this actually tell the endpoint that the dynamic table used inside the browser to decode headers from this endpoint is 65536 bytes long, or does it tell the endpoint that the endpoint's own dynamic table size should be 65536?

And in reverse: I assume the endpoint must send SETTINGS_HEADER_TABLE_SIZE to tell the browser about the dynamic table it uses for decoding headers, but I don't see that setting sent back by the endpoint. Can someone explain this?

There is also a signal for a dynamic table size update, mentioned in the RFC, which is sent inside a HEADERS frame:

 A dynamic table size update starts with the '001' 3-bit pattern,
   followed by the new maximum size, represented as an integer with a
   5-bit prefix (see Section 5.1).

   The new maximum size MUST be lower than or equal to the limit
   determined by the protocol using HPACK.  A value that exceeds this
   limit MUST be treated as a decoding error.  In HTTP/2, this limit is
   the last value of the SETTINGS_HEADER_TABLE_SIZE parameter (see
   Section 6.5.2 of [HTTP2]) received from the decoder and acknowledged
   by the encoder (see Section 6.5.3 of [HTTP2]).

There is this phrase "received from the decoder and acknowledged by the encoder", so is this signal sent to limit the encoder's dynamic table size? I'm completely lost, and it is not obvious from the Wireshark captures how this is handled correctly.
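To make the quoted wire format concrete, here is a minimal sketch of how that signal is encoded: the '001' pattern in the top bits, with the size as a prefix integer per RFC 7541 Section 5.1. This is my own illustration, not taken from any particular implementation:

```python
def encode_table_size_update(size: int) -> bytes:
    """Encode an HPACK Dynamic Table Size Update: '001' bit pattern
    followed by the new maximum size as a 5-bit-prefix integer."""
    out = bytearray()
    if size < 31:                      # value fits in the 5-bit prefix
        out.append(0x20 | size)
    else:
        out.append(0x20 | 31)          # prefix saturated: 0x3F
        size -= 31
        while size >= 128:             # continuation bytes, 7 bits each
            out.append((size % 128) | 0x80)
            size //= 128
        out.append(size)
    return bytes(out)

print(encode_table_size_update(0).hex())     # 20
print(encode_table_size_update(4096).hex())  # 3fe11f
```

So shrinking the table to 0 is a single byte 0x20, and a size of 4096 takes three bytes; an update like this must appear at the start of a header block.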

UPDATE

Ok, I looked more at the Wireshark logs from Firefox on walmart.com (since there are a lot of headers involved). Sometimes Firefox sends the dynamic table size update signal in a HEADERS frame, with a size smaller than the initial SETTINGS_HEADER_TABLE_SIZE that Firefox sent at the beginning of the connection. I wrote out Firefox's dynamic table on paper and shrank it the way I expected the dynamic table size update to work. It turns out that shrinking it to the smaller size produces incorrect headers. So apparently the dynamic table size update affects only the remote endpoint (well, I guess it does). I also looked at nghttp2 and a C# implementation, and there they actually shrink the encoder table size while sending the dynamic table size update signal. I get the feeling that everyone has a completely different implementation of this protocol; it's a complete nightmare to understand.

Upvotes: 0

Views: 1990

Answers (1)

Matthias247

Reputation: 10416

As you figured out, there are multiple things which indicate the table size:

  • The maximum table size setting (as indicated in a HTTP/2 SETTINGS frame)
  • The actual used table size - which is encoded in a HEADERS frame in HPACK format

If we only look at the headers flowing from the client (browser) to a server, we will see the following things going on:

  • As long as no information from the remote side has been received, the default values are used: the client assumes that the server supports a maximum table size of 4 kB (SETTINGS_HEADER_TABLE_SIZE), and it also uses this size as the initial table size.
  • The server can optionally inform the client through an HTTP/2 SETTINGS frame that it only supports smaller header tables. This information is carried in the SETTINGS_HEADER_TABLE_SIZE field of a SETTINGS frame sent from the server to the client.
  • The client can adjust the actually used [dynamic] header table size through the Dynamic Table Size Update in a HEADERS frame. This always indicates the table size that is actually in use on the encoder side, and which therefore must also be set on the decoder side in order to retrieve the same data. The sending side is free to set the actually used table size to anything between 0 and the maximum size supported by the remote side (in SETTINGS_HEADER_TABLE_SIZE). A typical strategy for implementations is to always shrink the used table size when it is currently larger than what the remote supports, and to increase the table size when the remote supports bigger tables and the implementation can still go bigger. There might be race conditions where one end has already set and used a larger table size than what the remote side actually supports, e.g. because the SETTINGS frame which indicates the lower limit was not received before the client encoded the first set of headers. In that case the remote side might detect the use of a too-big table size and reset the connection. To avoid these situations, both sides of the connection should in practice at least support the default table size of 4 kB, and ideally only increase the limit dynamically and never shrink it.
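The shrink/grow strategy from the last bullet can be sketched as a toy model. The class and method names here are illustrative, not from any real library; a real encoder would also evict table entries and emit the HPACK size-update bytes:

```python
from typing import Optional

class EncoderTable:
    """Toy model of the encoder-side sizing decision (illustrative names)."""

    def __init__(self, local_cap: int = 4096):
        self.local_cap = local_cap   # most memory this encoder is willing to spend
        self.peer_max = 4096         # RFC default, until the peer's SETTINGS says otherwise
        self.current = 4096          # table size currently in effect on this encoder

    def on_peer_settings(self, header_table_size: int) -> Optional[int]:
        """Handle the peer's SETTINGS_HEADER_TABLE_SIZE. Returns the size to
        signal in a Dynamic Table Size Update at the start of the next header
        block, or None if no signal is needed."""
        self.peer_max = header_table_size
        target = min(self.local_cap, self.peer_max)
        if target != self.current:
            self.current = target    # entries would also be evicted down to this size
            return target
        return None

enc = EncoderTable(local_cap=4096)
print(enc.on_peer_settings(65536))   # None  (4 kB already fits the peer's 64 kB limit)
print(enc.on_peer_settings(1024))    # 1024  (must shrink and signal the new size)
```

This matches the behaviour the asker observed in nghttp2-style implementations: the encoder shrinks its own table and announces that shrink to the decoder via the size-update signal.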

Now, I mentioned that one pair of maximum table size setting and actual table size is used for transmitting HEADERS from one end of the connection (client) to the other (server). But in total there is also a second pair of both, for the headers which are sent from the server to the client. For that direction the client/browser also indicates in a SETTINGS frame how big the maximum header table is that it supports, and the server sends the size of the actual header table that it uses.

Upvotes: 4
