rsgs india

Reputation: 31

WebSocket connection closed when behind proxy

I have a WebSocket-based chat application (HTML5).

The browser opens a socket connection to a Java-based WebSocket server over wss.

When the browser connects to the server directly (without any proxy), everything works well.

But when the browser is behind an enterprise proxy, the socket connection closes automatically after approximately 2 minutes of inactivity. The browser console shows "Socket closed".

In my test environment I have a Squid-DansGuardian proxy server.

Important: this behaviour is not observed when the browser connects without a proxy.

To keep some activity going, I embedded a simple jQuery script that makes an HTTP GET request to another server every 60 seconds. It did not help: I still get "Socket closed" in the browser console after about 2 minutes of inactivity.

Any help or pointers are welcome.

Thanks

Upvotes: 3

Views: 7929

Answers (2)

Myst

Reputation: 19221

This seems to me to be a feature, not a bug.

In production applications there is an issue related to what are known as "half-open" sockets - see this great blog post about it.

Connections are sometimes lost abruptly, dropping the TCP/IP connection without informing the other party. This can happen for many different reasons - Wi-Fi or cellular signal is lost, routers crash, modems disconnect, batteries die, the power goes out...

The only way to detect whether the socket is actually open is to try to send data... BUT your proxy might not be able to safely send data without interfering with your application's logic*.

After two minutes, your proxy assumes that the connection was lost and closes the socket on its end to save resources and allow new connections to be established.

If your proxy didn't take this precaution, on a long enough timeline all your available resources would be taken by dropped connections that would never close, preventing access to your application.

Two minutes is a lot. On Heroku the proxy timeout is set to 50 seconds (more reasonable). For HTTP connections, these timeouts are often much shorter.

The best option for you is to keep sending WebSocket data within the 2-minute window.

The WebSocket protocol resolves this issue with its built-in ping mechanism - use it. These pings should be sent by the server, and the browser responds to them with a pong directly (without involving the JavaScript application).

The JavaScript API (at least in the browser) doesn't let you send ping frames (a security measure, I guess, to prevent people from using browsers for DoS attacks).

A common practice among some developers (which I consider misguided) is to implement a JSON ping message that is either ignored by the server or answered with a JSON pong.

Since you are using Java on the server, you have access to the ping mechanism, and I suggest you implement it.
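Here is a minimal sketch of what that might look like with the standard JSR-356 (javax.websocket) API. The /chat path, the 50-second interval, and the empty ping payload are illustrative choices of mine, not details from the question:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;
    import javax.websocket.OnClose;
    import javax.websocket.OnMessage;
    import javax.websocket.OnOpen;
    import javax.websocket.PongMessage;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    @ServerEndpoint("/chat") // hypothetical path
    public class ChatEndpoint {

        // One scheduler shared by all sessions.
        private static final ScheduledExecutorService pinger =
                Executors.newSingleThreadScheduledExecutor();

        private ScheduledFuture<?> pingTask;

        @OnOpen
        public void onOpen(final Session session) {
            // Send a ping frame every 50 seconds - well inside the
            // proxy's 2-minute idle window.
            pingTask = pinger.scheduleAtFixedRate(() -> {
                try {
                    session.getBasicRemote().sendPing(ByteBuffer.wrap(new byte[0]));
                } catch (IOException e) {
                    // Sending failed: the connection is gone, stop pinging.
                    pingTask.cancel(false);
                }
            }, 50, 50, TimeUnit.SECONDS);
        }

        @OnMessage
        public void onPong(PongMessage pong, Session session) {
            // The browser answered our ping - the whole chain is intact.
        }

        @OnClose
        public void onClose(Session session) {
            if (pingTask != null) {
                pingTask.cancel(true);
            }
        }
    }

Fifty seconds keeps at least two pings inside the proxy's 2-minute idle window, so a single lost frame won't get the connection reaped.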

I would also recommend (if you have control of the proxy) that you lower the timeout to a more reasonable 50-second limit.


* The situation during production is actually even worse...

Because there is a long chain of intermediaries (home router/modem, NAT, ISP, gateways, routers, load balancers, proxies...), it's very likely that your application can still send data "successfully" because it's still connected to one of the intermediaries.

This starts a chain reaction that only reaches the application after a while, and again ONLY if it attempts to send data.

This is why ping frames expect pong frames in return (a returned pong means the whole chain of connections is intact).

P.S.

You should probably also complain about the Java application not closing the connection after a certain timeout. In production, this oversight might force you to restart your server every so often, or turn into a DoS situation (all available file handles get consumed by inactive old connections, leaving no room for new ones).
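If you are on JSR-356, one way to do that is to give each session an idle timeout so the container closes dead connections for you. A minimal sketch - the 150-second value and the endpoint name are illustrative only:

    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    @ServerEndpoint("/chat") // hypothetical path
    public class IdleAwareEndpoint {

        @OnOpen
        public void onOpen(Session session) {
            // Close this session after 150 seconds with no frames at all
            // (a value of 0 or less means "never time out").
            session.setMaxIdleTimeout(150_000); // milliseconds
        }
    }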

Upvotes: 3

Jonny Whatshisface

Reputation: 180

Check squid.conf for the request_timeout value and raise it. Note that this affects more than just WebSockets. For instance, in an environment I frequently work in, a Perl script is hit to generate various configurations, and execution can take upwards of 5-10 minutes to complete. The timeout value on both our httpd and the Squid server had to be raised to compensate for this.

Also look at the connect_timeout value, which defaults to one minute.
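For reference, a sketch of how those two directives might look in squid.conf - the 10-minute and 2-minute values are made up for illustration, not recommendations:

    # squid.conf - illustrative values only; tune for your environment.

    # Give requests more time before Squid gives up on them:
    request_timeout 10 minutes

    # Time allowed for connecting to the upstream server
    # (defaults to one minute, per the note above):
    connect_timeout 2 minutes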

Upvotes: 1
