BigBen

Reputation: 1202

How do I limit the number of connections Jetty will accept?

I'm running Jetty 7.2.2 and want to limit the number of connections it will handle, such that when it reaches a limit (e.g. 5000), it starts refusing connections.

Unfortunately, all the Connectors appear to just go ahead and accept incoming connections as fast as they can and dispatch them to the configured thread pool.

My problem is that I'm running in a constrained environment, and I only have access to 8K file descriptors. If I get a bunch of connections coming in I can quickly run out of file descriptors and get into an inconsistent state.

One option I have is to return an HTTP 503 Service Unavailable, but that still requires me to accept and respond to the connection - and I'd have to keep track of the number of incoming connections somewhere, perhaps by writing a servlet filter.

Is there a better solution to this?

Upvotes: 20

Views: 59205

Answers (6)

Vladislav Kysliy

Reputation: 3736

The more modern way to restrict incoming connections is the ConnectionLimit class (available in Jetty 9.4 and later). It is useful if you want to restrict connections before they reach the application/servlet level.

Quote from JavaDoc:

This listener applies a limit to the number of connections, which when exceeded results in a call to AbstractConnector.setAccepting(boolean) to prevent further connections being received.

Code example:

   Server server = new Server();
   // Stop accepting new connections once 5000 are open.
   server.addBean(new ConnectionLimit(5000, server));
   ...
   server.start();

Upvotes: 0

prokoba

Reputation: 209

acceptQueueSize

If I understand correctly, this is a lower-level TCP setting that controls the number of incoming connections the OS will queue when the server application calls accept() at a slower rate than connections arrive. See the second argument to http://download.oracle.com/javase/6/docs/api/java/net/ServerSocket.html#ServerSocket(int,%20int)
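For illustration, this is the constructor in question; the port and backlog values here are arbitrary:

    import java.net.ServerSocket;

    // The second argument is the TCP accept backlog: how many fully
    // established connections the OS will queue before the application
    // calls accept(). 8080 and 50 are illustrative values.
    ServerSocket serverSocket = new ServerSocket(8080, 50);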

This is something entirely different from the number of requests queued in the Jetty QueuedThreadPool. The requests queued there are already fully connected, and are waiting for a thread to become available in the pool, after which their processing can start.

I have a similar problem. I have a CPU-bound servlet (almost no I/O or waiting, so async can't help). I can easily limit the maximum number of threads in the Jetty pool so that thread-switching overhead is kept at bay. I cannot, however, seem to limit the length of the request queue. This means that as the load grows, the response times grow accordingly, which is not what I want.

What I want is: if all threads are busy and the number of queued requests reaches N, return 503 (or some other error code) for all further requests, instead of growing the queue forever.

I'm aware that I can limit the number of simultaneous requests to the jetty server by using a load balancer (e.g. haproxy), but can it be done with Jetty alone?

P.S. After writing this, I discovered the Jetty DoSFilter, and it seems it can be configured to reject incoming requests with 503 if a preconfigured concurrency level is exceeded (see the sketch below) :-)
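A minimal sketch of registering the DoSFilter on an embedded ServletContextHandler (Jetty 8+ servlet API; the init-parameter values are illustrative and should be checked against the DoSFilter documentation):

    import java.util.EnumSet;
    import javax.servlet.DispatcherType;
    import org.eclipse.jetty.servlet.FilterHolder;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlets.DoSFilter;

    ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
    FilterHolder dos = new FilterHolder(DoSFilter.class);
    dos.setInitParameter("maxRequestsPerSec", "25"); // illustrative rate limit
    dos.setInitParameter("delayMs", "-1");           // -1 rejects over-limit requests instead of delaying them
    context.addFilter(dos, "/*", EnumSet.of(DispatcherType.REQUEST));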

Upvotes: 5

Kent Tong

Reputation: 87

<Configure id="Server" class="org.eclipse.jetty.server.Server">
    <Set name="ThreadPool">
      <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
        <!-- specify a bounded queue -->
        <Arg>
           <New class="java.util.concurrent.ArrayBlockingQueue">
              <Arg type="int">6000</Arg>
           </New>
      </Arg>
        <Set name="minThreads">10</Set>
        <Set name="maxThreads">200</Set>
        <Set name="detailedDump">false</Set>
      </New>
    </Set>
</Configure>
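For embedded use, a roughly equivalent sketch in Java, assuming the Jetty 7/8 QueuedThreadPool constructor that accepts a BlockingQueue (later Jetty versions configure the queue differently):

    import java.util.concurrent.ArrayBlockingQueue;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;

    // Bound the job queue at 6000 entries so it cannot grow without limit.
    QueuedThreadPool pool = new QueuedThreadPool(new ArrayBlockingQueue<Runnable>(6000));
    pool.setMinThreads(10);
    pool.setMaxThreads(200);

    Server server = new Server();
    server.setThreadPool(pool); // Server.setThreadPool exists in Jetty 7/8
    server.start();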

Upvotes: 3

BigBen

Reputation: 1202

I ended up going with a solution which keeps track of the number of requests and sends a 503 when the load is too high. It's not ideal, and as you can see I had to add a way to always let continuation requests through so they didn't get starved. Works well for my needs:

import java.io.IOException;
import java.util.concurrent.Semaphore;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

import org.apache.log4j.Logger; // assumed: a logger with debug()/info(), matching the calls below
import org.eclipse.jetty.continuation.ContinuationSupport;

public class MaxRequestsFilter implements Filter {

    private static Logger cat   = Logger.getLogger(MaxRequestsFilter.class.getName());

    private static final int DEFAULT_MAX_REQUESTS = 7000;
    private Semaphore requestPasses;

    @Override
    public void destroy() {
        cat.info("Destroying MaxRequestsFilter");
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {

        long start = System.currentTimeMillis();
        cat.debug("Filtering with MaxRequestsFilter, current passes are: " + requestPasses.availablePermits());
        // Take a permit without blocking; request threads must never block here.
        boolean gotPass = requestPasses.tryAcquire();
        // Always let resumed continuations through so suspended requests aren't starved.
        boolean resumed = ContinuationSupport.getContinuation(request).isResumed();
        try {
            if (gotPass || resumed ) {
                chain.doFilter(request, response);
            } else {
                ((HttpServletResponse) response).sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            }
        } finally {
            if (gotPass) {
                requestPasses.release();
            }
        }
        cat.debug("Filter duration: " + (System.currentTimeMillis() - start) + " resumed is: " + resumed);
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {

        cat.info("Creating MaxRequestsFilter");

        int maxRequests = DEFAULT_MAX_REQUESTS;
        requestPasses = new Semaphore(maxRequests, true);
    }

}
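For completeness, a hypothetical registration of this filter on an embedded ServletContextHandler (Jetty 8+ style; with Jetty 7's servlet 2.5 API the third argument is an int dispatch mask rather than an EnumSet):

    import java.util.EnumSet;
    import javax.servlet.DispatcherType;
    import org.eclipse.jetty.servlet.ServletContextHandler;

    ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
    // Apply the filter to every request in this context.
    context.addFilter(MaxRequestsFilter.class, "/*", EnumSet.of(DispatcherType.REQUEST));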

Upvotes: 13

Darin Wright

Reputation: 127

The thread pool has a queue associated with it. By default, it is unbounded. However, when creating a thread pool you can provide a bounded queue to base it on. For example:

// minThreads, maxThreads, maxIdleTime (ms) and maxQueueSize are illustrative values.
int minThreads = 10, maxThreads = 200, maxIdleTime = 60000, maxQueueSize = 6000;
Server server = new Server();
LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(maxQueueSize);
ExecutorThreadPool pool = new ExecutorThreadPool(minThreads, maxThreads, maxIdleTime, TimeUnit.MILLISECONDS, queue);
server.setThreadPool(pool);

This appears to have resolved the problem for me. Otherwise, with the unbounded queue the server ran out of file handles as it started up under heavy load.

Upvotes: 11

Senthil

Reputation: 5804

I have not deployed Jetty for my own application, but I have used it with some other open-source projects. Based on that experience, there are connector configuration options as below:

acceptors : The number of threads dedicated to accepting incoming connections.

acceptQueueSize : Number of connection requests that can be queued up before the operating system starts to send rejections.

http://wiki.eclipse.org/Jetty/Howto/Configure_Connectors

You need to add them to the connector block in your configuration:

<Call name="addConnector">
  <Arg>
      <New class="org.mortbay.jetty.nio.SelectChannelConnector">
        <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>
        <Set name="maxIdleTime">30000</Set>
        <Set name="Acceptors">20</Set>
        <Set name="confidentialPort">8443</Set>
      </New>
  </Arg>
</Call>
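The programmatic equivalent, sketched against the Jetty 7 API (org.eclipse.jetty packages rather than the older org.mortbay ones; values are illustrative):

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.nio.SelectChannelConnector;

    Server server = new Server();
    SelectChannelConnector connector = new SelectChannelConnector();
    connector.setPort(8080);
    connector.setMaxIdleTime(30000);
    connector.setAcceptors(20);          // threads dedicated to accept()
    connector.setAcceptQueueSize(5000);  // OS-level accept backlog
    server.addConnector(connector);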

Upvotes: 5
