Reputation: 1988
What happens if a client takes longer to process messages than the rate at which messages come in?
Let me write some dummy code to illustrate what I'm trying to ask:
import asyncio
import random

import websockets


async def process(message):
    """Process one message; may take up to 1000ms."""
    # some actual work would happen here
    await asyncio.sleep(random.random())


async def process_stream(address):
    """Process all messages as they arrive."""
    async with websockets.connect(address) as websocket:
        # expect one message every 100ms
        async for message in websocket:
            # process each message for up to 1000ms
            await process(message)


asyncio.run(process_stream("wss://place-of-interest"))
In this case we are receiving messages at a rate roughly 10x higher than we can process them. Processing will occasionally, or even regularly, still be running when new messages are supposed to arrive.
Are websockets implemented/guaranteed to buffer messages until the application is ready to process them? Obviously, if this were a real application, such a sustained mismatch would be problematic, but what if it only occasionally takes longer to process a message than the interval at which messages get sent? Can messages get dropped in this manner?
(In case a concrete websocket implementation is desired: I'm working with the tokio-tungstenite implementation of websockets.)
Upvotes: 1
Views: 698
Reputation: 23394
Websockets are a high-level protocol layered on top of TCP, and TCP includes a flow control mechanism that slows the sender down when the receiver can't keep up. Basically:

- if your application stops reading, the client's TCP receive buffer fills up;
- the client then advertises a shrinking (eventually zero) receive window, so the server's kernel stops transmitting;
- the server's send buffer fills up in turn, and further writes either block or fail with an E_WOULDBLOCK error, allowing the sending application to handle the slowdown.

So messages are not silently dropped at the websocket/TCP level; the sender is throttled instead. Note that in case of the occasional glitch, the client and server buffers are usually enough to smooth things over until the application catches up.
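To make that concrete in terms of the question's code, here is a minimal sketch (Python asyncio with the websockets package, reusing the question's hypothetical wss://place-of-interest endpoint; the queue size of 100 is an arbitrary assumption, not something the library prescribes). It decouples reading from processing with a bounded queue: an occasionally slow process() call no longer stalls the read loop, while a persistently full queue still stops the reader and lets TCP flow control throttle the server.

import asyncio
import random

import websockets


async def process(message):
    """Process one message; may take up to 1000ms."""
    await asyncio.sleep(random.random())


async def consume(queue):
    """Drain the queue, processing messages one at a time."""
    while True:
        message = await queue.get()
        await process(message)
        queue.task_done()


async def process_stream(address):
    # At most 100 messages are buffered in the application; beyond that,
    # queue.put() waits, the read loop stops pulling from the socket,
    # and TCP flow control slows the sender down.
    queue = asyncio.Queue(maxsize=100)
    consumer = asyncio.create_task(consume(queue))
    async with websockets.connect(address) as websocket:
        async for message in websocket:
            await queue.put(message)
    await queue.join()
    consumer.cancel()


asyncio.run(process_stream("wss://place-of-interest"))

Keeping the queue bounded matters: an unbounded queue would never push back, so a sustained 10x mismatch would just show up as ever-growing memory use instead of a slower sender.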
Upvotes: 3