Reputation: 70241
We're running a web app on Tomcat 6 and Apache mod_proxy 2.2.3. Seeing a lot of 502 errors like this:
Bad Gateway! The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /the/page.do.
Reason: Error reading from remote server
If you think this is a server error, please contact the webmaster.
Error 502
Tomcat has plenty of threads, so it's not thread-constrained. We're pushing 2400 users via JMeter against the app. All the boxes are sitting inside our firewall on a fast unloaded network, so there shouldn't be any network problems.
Anyone have any suggestions for things to look at or try? We're heading to tcpdump next.
UPDATE 10/21/08: Still haven't figured this out. Seeing only a very small number of these under load. The answers below haven't provided any magical answers...yet. :)
Upvotes: 57
Views: 247588
Reputation: 2480
If you want to handle your webapp's timeout with an Apache load balancer, you first have to understand the different meanings of timeout.
I'll try to condense the discussion I found here: http://apache-http-server.18135.x6.nabble.com/mod-proxy-When-does-a-backend-be-considered-as-failed-td5031316.html :
It appears that mod_proxy considers a backend as failed only when the transport layer connection to that backend fails. Unless failonstatus/failontimeout is used. ...
So, setting failontimeout is necessary for Apache to consider a timeout of the webapp (e.g. served by Tomcat) as a fail (and consequently switch to the hot spare server). As for the proper configuration, note the following misconfiguration first:
ProxyPass / balancer://localbalance/ failontimeout=on timeout=10 failonstatus=50
This is a misconfiguration because:
You are defining a balancer here, so the timeout parameter relates to the balancer (like the two others). However, for a balancer the timeout parameter is not a connection timeout (like the one used with BalancerMember), but the maximum time to wait for a free worker/member (e.g. when all the workers are busy or in error state, the default being to not wait).
So, a proper configuration is done like this:
timeout at the BalancerMember level:
<Proxy balancer://mycluster>
    BalancerMember http://member1:8080/svc timeout=6
    ... more BalancerMembers here
</Proxy>
failontimeout on the balancer:
ProxyPass /svc balancer://mycluster failontimeout=on
Restart Apache.
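For reference, a minimal end-to-end sketch combining both pieces might look like the following; the member hostnames and the hot-spare status=+H flag are assumptions for illustration, not part of the original answer:
<Proxy balancer://mycluster>
    # Per-member connection/IO timeout, in seconds
    BalancerMember http://member1:8080/svc timeout=6
    # Assumed hot spare, only used when the regular members are in error state
    BalancerMember http://member2:8080/svc timeout=6 status=+H
</Proxy>
# Treat a member timeout as a failure so the balancer fails over
ProxyPass /svc balancer://mycluster failontimeout=on
ProxyPassReverse /svc balancer://mycluster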
Upvotes: 0
Reputation: 4305
Sample from apache conf:
# Default value is 2 minutes
Timeout 600
ProxyRequests off
ProxyPass /app balancer://MyApp stickysession=JSESSIONID lbmethod=bytraffic nofailover=On
ProxyPassReverse /app balancer://MyApp
ProxyTimeout 600
<Proxy balancer://MyApp>
BalancerMember http://node1:8080/ route=node1 retry=1 max=25 timeout=600
.........
</Proxy>
Upvotes: 4
Reputation: 8128
I know this does not directly answer this question, but I came here because I had the same error with a Node.js server. I was stuck for a long time until I found the solution: simply add a trailing slash (/) to the end of the ProxyPass and ProxyPassReverse URLs in Apache.
My old configuration was:
ProxyPass / http://192.168.1.1:3001
ProxyPassReverse / http://192.168.1.1:3001
The correct configuration is:
ProxyPass / http://192.168.1.1:3001/
ProxyPassReverse / http://192.168.1.1:3001/
Upvotes: 5
Reputation: 9222
You can use the proxy-initial-not-pooled environment variable.
See http://httpd.apache.org/docs/2.2/mod/mod_proxy_http.html :
If this variable is set no pooled connection will be reused if the client connection is an initial connection. This avoids the "proxy: error reading status line from remote server" error message caused by the race condition that the backend server closed the pooled connection after the connection check by the proxy and before data sent by the proxy reached the backend. It has to be kept in mind that setting this variable downgrades performance, especially with HTTP/1.0 clients.
We had this problem, too. We fixed it by adding
SetEnv proxy-nokeepalive 1
SetEnv proxy-initial-not-pooled 1
and turning keepAlive off on all servers.
mod_proxy_http is fine in most scenarios, but we are running it under heavy load and we still got some timeout problems we do not understand.
But see if the above directives fit your needs.
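A minimal sketch of where those directives might live, assuming a reverse-proxy virtual host; the server name and backend address are placeholders, not from the original answer:
<VirtualHost *:80>
    ServerName www.example.com
    # Never hand a pooled backend connection to a client's first request
    SetEnv proxy-initial-not-pooled 1
    # Do not keep backend connections alive at all
    SetEnv proxy-nokeepalive 1
    # Turn off client-facing keep-alive as well, as described above
    KeepAlive Off
    ProxyPass / http://backend.example.com:8080/
    ProxyPassReverse / http://backend.example.com:8080/
</VirtualHost>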
Upvotes: 12
Reputation: 9616
You can avoid global timeouts, or having to edit your virtual hosts, by specifying the proxy timeouts in the ProxyPass directive as follows:
ProxyPass /svc http://example.com/svc timeout=600
ProxyPassReverse /svc http://example.com/svc timeout=600
Notice timeout=600 seconds.
However, this does not always work when you have a load balancer. In that case you must add the timeouts in both places (tested in Apache 2.2.31).
Load Balancer example:
<Proxy "balancer://mycluster">
BalancerMember "http://member1:8080/svc" timeout=600
BalancerMember "http://member2:8080/svc" timeout=600
</Proxy>
ProxyPass /svc "balancer://mycluster" timeout=600
ProxyPassReverse /svc "balancer://mycluster" timeout=600
A side note: the timeout=600 on ProxyPass was not required when Chrome was the client (I don't know why), but without this timeout on ProxyPass, Internet Explorer (11) aborts saying the connection was reset by the server.
My theory is that:
the ProxyPass timeout is used between the client (browser) and Apache;
the BalancerMember timeout is used between Apache and the backend.
Those who use Tomcat or another backend may also want to pay attention to the HTTP Connector timeouts.
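For Tomcat, that usually means the connectionTimeout attribute on the HTTP Connector in conf/server.xml; a sketch for illustration only, the 600000 ms value being an assumption rather than part of the original answer:
<!-- conf/server.xml: connectionTimeout is in milliseconds -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="600000"
           redirectPort="8443" />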
Upvotes: 3
Reputation: 1
You should be able to get this problem resolved by setting the Timeout and ProxyTimeout parameters to 600 seconds. It worked for me after battling with it for a while.
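A minimal sketch of what that might look like in the Apache configuration, using the 600-second value suggested above (where exactly you place these depends on your setup):
# Wait up to 600 seconds on client/backend network I/O
Timeout 600
# Wait up to 600 seconds for the proxied backend to respond
ProxyTimeout 600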
Upvotes: 2
Reputation: 271
Just to add some specific settings, I had a similar setup (with Apache 2.0.63 reverse proxying onto Tomcat 5.0.27).
For certain URLs the Tomcat server could take perhaps 20 minutes to return a page.
I ended up modifying the following settings in the Apache configuration file to prevent it from timing out with its proxy operation (with a large over-spill factor in case Tomcat took longer to return a page):
Timeout 5400
ProxyTimeout 5400
ProxyTimeout alone wasn't enough. Looking at the documentation for Timeout I'm guessing (I'm not sure) that this is because while Apache is waiting for a response from Tomcat, there is no traffic flowing between Apache and the Browser (or whatever http client) - and so Apache closes down the connection to the browser.
I found that if I left the Timeout setting at its default (300 seconds), then if the proxied request to Tomcat took longer than 300 seconds to get a response the browser would display a "502 Proxy Error" page. I believe this message is generated by Apache, in the knowledge that it's acting as a reverse proxy, before it closes down the connection to the browser (this is my current understanding - it may be flawed).
The proxy error page says:
Proxy Error
The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET.
Reason: Error reading from remote server
...which suggests that it's the ProxyTimeout setting that's too short, while investigation shows that Apache's Timeout setting (the timeout between Apache and the client) also influences this.
Upvotes: 47
Reputation: 4305
Most likely you should increase the Timeout parameter in the Apache conf (default value is 300 seconds).
Upvotes: 1
Reputation: 70241
So, answering my own question here. We ultimately determined that we were seeing 502 and 503 errors in the load balancer due to Tomcat threads timing out. In the short term we increased the timeout. In the longer term, we fixed the app problems that were causing the timeouts in the first place. Why Tomcat timeouts were being perceived as 502 and 503 errors at the load balancer is still a bit of a mystery.
Upvotes: 15
Reputation: 5765
I'm guessing you're using mod_proxy_http (or proxy balancer).
Look in your Tomcat logs (localhost.log or catalina.log). I suspect you're seeing an exception in your web stack bubbling up and closing the socket that the Tomcat worker is connected to.
Upvotes: 3