Reputation: 50756
We're upgrading from Apache HttpClient 4.5 to 5.2, and there seems to have been a significant change in how the pool timeout works. We want the pool to have a fixed number of connections available, with additional threads failing immediately rather than waiting. Since non-positive values are treated as infinite timeouts, we set the ConnectionRequestTimeout to 1 ms so a request checks for a free connection and fails almost instantly if the pool is exhausted.
In 4.5 this approach worked fine, but in 5.2 it seems the timeout is checked prematurely and fails when there should still be connections available. Here's a test that reproduces the issue fairly reliably for me:
import java.io.IOException;

import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.config.RequestConfig;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.core5.http.ClassicHttpResponse;
import org.apache.hc.core5.util.Timeout;

class Test {
    static CloseableHttpClient client;

    public static void main(String[] args) {
        var requestConfig = RequestConfig.custom()
                .setConnectionRequestTimeout(Timeout.ofMilliseconds(1))
                .build();
        var connManager = new PoolingHttpClientConnectionManager();
        connManager.setMaxTotal(50);
        connManager.setDefaultMaxPerRoute(50);
        client = HttpClients.custom()
                .setDefaultRequestConfig(requestConfig)
                .setConnectionManager(connManager)
                .build();
        // 45 concurrent requests against a pool of 50 -- none should time out
        for (int i = 0; i < 45; i++) {
            new Requester().start();
        }
    }

    static class Requester extends Thread {
        String result;

        @Override
        public void run() {
            try {
                result = client.execute(new HttpGet("https://www.example.com"),
                        ClassicHttpResponse::toString);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
This outputs a series of stack traces like this:
org.apache.hc.client5.http.impl.classic.RequestFailedException: Request execution failed
at org.apache.hc.client5.http.impl.classic.InternalExecRuntime.acquireEndpoint(InternalExecRuntime.java:131)
at org.apache.hc.client5.http.impl.classic.ConnectExec.execute(ConnectExec.java:125)
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at org.apache.hc.client5.http.impl.classic.ProtocolExec.execute(ProtocolExec.java:192)
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at org.apache.hc.client5.http.impl.classic.HttpRequestRetryExec.execute(HttpRequestRetryExec.java:113)
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at org.apache.hc.client5.http.impl.classic.ContentCompressionExec.execute(ContentCompressionExec.java:152)
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at org.apache.hc.client5.http.impl.classic.RedirectExec.execute(RedirectExec.java:116)
at org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
at org.apache.hc.client5.http.impl.classic.InternalHttpClient.doExecute(InternalHttpClient.java:170)
at org.apache.hc.client5.http.impl.classic.CloseableHttpClient.execute(CloseableHttpClient.java:245)
at org.apache.hc.client5.http.impl.classic.CloseableHttpClient.execute(CloseableHttpClient.java:188)
at org.apache.hc.client5.http.impl.classic.CloseableHttpClient.execute(CloseableHttpClient.java:162)
at Test$Requester.run(Test.java:40)
Caused by: org.apache.hc.core5.util.DeadlineTimeoutException: Deadline: 2024-01-10T02:02:37.621+0000, -15 MILLISECONDS overdue
at org.apache.hc.core5.util.DeadlineTimeoutException.from(DeadlineTimeoutException.java:49)
at org.apache.hc.core5.pool.StrictConnPool.lease(StrictConnPool.java:222)
at org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager.lease(PoolingHttpClientConnectionManager.java:298)
at org.apache.hc.client5.http.impl.classic.InternalExecRuntime.acquireEndpoint(InternalExecRuntime.java:103)
... 15 more
Increasing the timeout to 25 ms reduces the frequency of timeouts, but they still happen on occasion. Is this a bug? Is there another way to configure the pool with a low (or no) timeout?
Upvotes: 4
Views: 3279
Reputation: 2545
It looks like the internals of PoolingHttpClientConnectionManager have been rewritten significantly. In the 5.x releases there are now two exceptions related to the connection request timeout: the newly introduced DeadlineTimeoutException and ConnectionRequestTimeoutException (which is similar to ConnectionPoolTimeoutException from pre-5.x). If you increase the number of threads in your example to more than 50, you will almost certainly get the latter rather than the former.
I'm not an expert in Apache HttpClient and won't go into detail, but to me, this is a bug close to https://issues.apache.org/jira/browse/HTTPCORE-754 (but not the same).
One workaround I've found to get rid of DeadlineTimeoutException is to configure the pool with the PoolConcurrencyPolicy.LAX policy, although this does not guarantee 100% pool utilization. There is a bit about this in the migration guide.
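As a sketch, configuring the LAX policy could look like the following. The builder and enum are from HttpClient 5.x; the pool sizes are just the values from the question, and note that with LAX the per-route limit is the operative setting:

```java
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManagerBuilder;
import org.apache.hc.core5.pool.PoolConcurrencyPolicy;

class LaxPoolExample {
    public static void main(String[] args) throws Exception {
        // LAX relaxes the pool's global locking, avoiding the deadline
        // bookkeeping that surfaces as DeadlineTimeoutException, at the
        // cost of looser enforcement of the total-connection cap.
        PoolingHttpClientConnectionManager connManager =
                PoolingHttpClientConnectionManagerBuilder.create()
                        .setPoolConcurrencyPolicy(PoolConcurrencyPolicy.LAX)
                        .setMaxConnPerRoute(50)
                        .build();
        CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(connManager)
                .build();
        System.out.println("LAX pool configured");
        client.close();
    }
}
```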
Upvotes: 2
Reputation: 99
Why are you using the connection request timeout to achieve this? Use this on the HttpClients builder instead:
.evictIdleConnections(Timeout.ofMilliseconds(1))
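For illustration, wiring this into a client might look as follows. This is only a sketch; evictIdleConnections starts a background thread that closes connections idle longer than the given time, which manages pool occupancy rather than making lease requests fail fast, so it may not address the exception in the question:

```java
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.core5.util.Timeout;

class EvictExample {
    public static void main(String[] args) throws Exception {
        // A background evictor thread closes connections that have been
        // idle for longer than 1 ms, keeping the pool nearly empty of
        // idle connections.
        CloseableHttpClient client = HttpClients.custom()
                .evictIdleConnections(Timeout.ofMilliseconds(1))
                .build();
        System.out.println("evictor configured");
        client.close(); // stops the evictor thread
    }
}
```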
Upvotes: 0