Reputation: 213
I'm working on a task where I need to make multiple requests to an HTTPS URL from my Java program and read the responses. This process is to be repeated many times with different requests.
The latency (the time between request and response) for each request is around 300 ms if I use just one thread, making requests sequentially, and the throughput is around 3.3 requests per second.
However, since the goal is high throughput, I have decided to use multiple threads, each making a request at any given point in time.
Some important details:
I am using as many URL instances as there are threads. The idea is that each thread uses a single URL instance and calls openConnection() on it every time it makes a request.
I'm closing the input stream with inputStream.close() each time after reading the response, so that the underlying socket can be reused.
I'm not calling httpConn.disconnect(), as that would close the underlying socket.
I have set http.maxConnections to the number of threads using System.setProperty("http.maxConnections", String.valueOf(threadCount));
I am also checking the number of connections open at any given point in time using "netstat -a | grep | wc -l", and this always gives a number equal to or above the thread count, as expected.
Even after doing all this, I am not getting the expected throughput. If one thread gives a throughput of 3.3 requests per second, I would expect 100 threads to give at least 300 requests per second.
Can anyone explain where I am going wrong, or suggest a better solution? Below is my code snippet.
Main Class:
public static void main(String[] args) throws Exception
{
    // threadCount and regURL are defined elsewhere in the class.
    URL[] urlConnArray = new URL[threadCount];
    for (int j = 0; j < urlConnArray.length; j++)
        urlConnArray[j] = new URL(regURL);

    System.setProperty("http.keepalive", "true");
    System.setProperty("http.maxConnections", String.valueOf(threadCount));

    for (int i = 0; i < 1000000; i++)
    {
        Thread regThread = new Thread(new RegisterThread(urlConnArray[i]));
        regThread.start();
    }
}
RegisterThread Class:
public class RegisterThread implements Runnable
{
    private final URL url;

    public RegisterThread(URL url) { this.url = url; }

    public void run()
    {
        try
        {
            HttpURLConnection httpConn = (HttpURLConnection) url.openConnection();
            httpConn.setUseCaches(false);
            httpConn.setDoOutput(true);
            httpConn.setRequestMethod("POST");
            httpConn.setRequestProperty("Content-Type", "application/json");

            // Prepare the request body.....

            long requestTime = System.currentTimeMillis();
            InputStream is = httpConn.getInputStream();
            long responseTime = System.currentTimeMillis();
            long latency = responseTime - requestTime;

            BufferedReader reader = new BufferedReader(new InputStreamReader(is));
            StringBuffer response = new StringBuffer();
            String line;
            while ((line = reader.readLine()) != null)
            {
                response.append(line);
            }
            is.close();
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
Upvotes: 0
Views: 2031
Reputation: 38910
Unfortunately, your assumption is wrong.
If one thread gives a throughput of 3.3 requests per second, I would expect 100 threads to give at least 300 requests per second.
The performance of multi-threading depends on the number of CPU cores. If you are running your application on a single-core CPU, you may not notice any gain from multi-threading. Things may even get worse because of context switching between threads on the single core. In that case, the results are inferior to single-threaded processing for the same use case.
If you have a 100-core CPU, you can achieve results similar to 300-500 ms per thread, assuming the threads are not contending on many shared locks on objects/methods.
If you want to fine-tune the performance, I would suggest the changes below (in general, not specific to your problem):
1) Use Java's higher-level threading features like ExecutorService, and declare the thread pool count as the number of cores of your CPU.
2) Avoid shared locks between threads as much as possible.
Have a look at Java's support for multi-threading on multi-core CPUs at java support for parallel processing
Example of ExecutorService code:
// Replace 10 with a number greater than the number of CPU cores for better performance
ExecutorService executorService = Executors.newFixedThreadPool(10);
executorService.execute(new Runnable() {
    public void run() {
        // Add your business logic here
        System.out.println("Asynchronous task");
    }
});
executorService.shutdown();
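As a small follow-up to point 1, the pool size can be derived from the machine rather than hard-coded. A minimal sketch of that idea (the class name CoreSizedPool is made up purely for illustration):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CoreSizedPool
{
    public static void main(String[] args)
    {
        // Size the pool from the number of available cores instead of a hard-coded value.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService executorService = Executors.newFixedThreadPool(cores);

        executorService.execute(() -> System.out.println("Asynchronous task"));
        executorService.shutdown();
    }
}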
Have a look at Executor Service usage
Upvotes: 0
Reputation: 5313
Too Many Threads
The code is creating far too many concurrent threads: 1,000,000 (the loop runs 1M times, creating and starting a new thread each time).
The URL definitions are being pooled, but a new connection is being opened each time.
You may also get better performance from some of the custom REST clients available from various sources (Apache, Grizzly, Netty, etc.) than from the built-in JDK URL/Connection classes.
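For instance, a minimal sketch using Apache HttpClient 4.x with a pooling connection manager might look like the following (the URL, pool sizes, and request body are placeholders, not taken from the question):
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class PooledClientSketch
{
    public static void main(String[] args) throws Exception
    {
        // One pooled client shared by all threads; connections are kept alive and reused.
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(100);            // total connections across all routes
        cm.setDefaultMaxPerRoute(100);  // connections allowed to a single host

        try (CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .build())
        {
            HttpPost post = new HttpPost("https://example.com/register"); // placeholder URL
            post.setEntity(new StringEntity("{}", ContentType.APPLICATION_JSON));

            try (CloseableHttpResponse response = client.execute(post))
            {
                // Fully consuming the entity releases the connection back to the pool.
                EntityUtils.consume(response.getEntity());
            }
        }
    }
}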
I/O Bound
The application is I/O bound, rather than CPU bound.
This app should have a lot more threads than cores, but not 1 million! (I'm surprised it's not running out of memory).
The reasons you should use more threads:
The I/O blocks the current thread while it waits for data from the remote system.
The CPU can be better utilized doing other work during that period.
Having more threads than CPU cores for blocking I/O will therefore result in both better CPU usage and better I/O throughput.
Paul Tyma's overview on synchronous I/O & Non-blocking I/O (2008) is a very useful read.
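To make the sizing idea concrete, here is a minimal sketch that replaces the raw threads with a bounded pool (the pool size of 200 is an illustrative assumption, not a measured value, and the request itself is left as a stub):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BlockingIoPoolSketch
{
    public static void main(String[] args) throws InterruptedException
    {
        // For blocking I/O, a pool far larger than the core count keeps the CPU busy
        // while most threads are parked waiting on the network.
        ExecutorService pool = Executors.newFixedThreadPool(200); // illustrative size

        for (int i = 0; i < 1000000; i++)
        {
            pool.execute(() -> {
                // issue one HTTP request and read the response here
            });
        }

        pool.shutdown();                          // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for queued work to drain
    }
}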
Java 8 Streaming
While the JDK in Java 8 provides streaming infrastructure for CPU-bound tasks (not suitable here), we have created a library, simple-react, that is designed precisely for your purpose: improving system throughput where you have blocking I/O. With simple-react you could create a Stream something like this:
LazyReact streamBuilder = new LazyReact(threadCount); // create a Stream builder with x threads
streamBuilder.range(0, 1000000)
             .map(i -> new RegisterThread(urlConnArray[i]))
             .forEach(task -> task.run());
Upvotes: 3