Reputation: 6726
I have an application in which I open a websocket from a browser (Chrome, in my case) to a server, then I start sending messages from the server side to the browser. What I am finding is that when I send messages too quickly from the server, messages start getting buffered up on the browser side. This means that the browser "falls behind" and ends up processing messages sent long ago by the server, which in my application is undesirable.
I have eliminated the following possible candidates for where this buffering is happening:
The server. I can kill the server process entirely and messages continue to be received by my JavaScript code for several minutes, so the buffering is not happening inside the server process.
The network. I can reproduce the same issue when running the server on the same machine as my web browser, and the amount of data that I am sending is far below the bandwidth constraints for a TCP connection to localhost.
This leaves the browser. Is there any way I can (a) determine the size of the buffer Chrome is maintaining for my WebSocket, or (b) reduce the size of this buffer and cause Chrome to drop frames?
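(For reference, here is a sketch of how the lag can be observed on the client side; the ws://localhost:8080 URL and the JSON sentAt field are placeholders rather than my real protocol, and clock skew makes the absolute numbers rough, but a value that keeps growing over time is what shows the browser falling behind.)

```javascript
// Sketch: roughly measure how far behind the client is running, assuming the
// server includes a "sentAt" epoch-millisecond timestamp in each JSON message
// (placeholder protocol detail). Clock skew shifts the absolute numbers, but a
// steadily growing value means the browser is falling behind.
const socket = new WebSocket('ws://localhost:8080'); // placeholder URL

socket.onmessage = (event) => {
  const message = JSON.parse(event.data);
  const lagMs = Date.now() - message.sentAt;
  console.log(`received ~${lagMs} ms after the server sent it`);
  // ...actual (expensive) processing of the message would go here...
};
```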
Upvotes: 7
Views: 3771
Reputation: 148
When the processing done by JavaScript is trivial, Chrome can handle over 50 MB per second over a WebSocket, so it sounds like the processing you are doing is non-trivial. You can drop messages that are too old in the onmessage handler (but bear in mind that the clock on the client may be out of sync with the clock on the server).
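For example, a minimal sketch of that dropping strategy could look like the following; the sentAt field, the ws://localhost:8080 URL and the 500 ms tolerance are assumptions for illustration, and estimating the offset from the first message is just a cheap way to sidestep the clock-skew problem mentioned above.

```javascript
// Sketch: drop stale messages in the onmessage handler, assuming each message
// is JSON with a server-side "sentAt" timestamp in epoch milliseconds (an
// assumed protocol detail). The offset estimated from the first message
// compensates for client/server clock skew, so only relative staleness is
// measured after that.
const MAX_AGE_MS = 500;   // tolerance before a message counts as stale (assumed)
let clockOffsetMs = null; // estimated (client clock - server clock)

const socket = new WebSocket('ws://localhost:8080'); // placeholder URL

socket.onmessage = (event) => {
  const message = JSON.parse(event.data);

  if (clockOffsetMs === null) {
    // Treat the first message as "fresh" and use it to estimate the skew.
    clockOffsetMs = Date.now() - message.sentAt;
  }

  const ageMs = Date.now() - (message.sentAt + clockOffsetMs);
  if (ageMs > MAX_AGE_MS) {
    return; // stale: skip the expensive processing entirely
  }

  doExpensiveProcessing(message); // placeholder for the real work
};

function doExpensiveProcessing(message) {
  // application-specific handling would go here
}
```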
If the main thread of the browser is always busy, even dropping messages may not be enough to keep up. I recommend the "Performance" tab in Chrome DevTools as a good way to see where your application is spending its time.
Upvotes: 7