Reputation: 82
I have a server microservice running, as shown in the diagram:
While monitoring I see that it starts with 24 threads, the thread count increases to 32 when requests come in, and drops back to 24 threads after they are processed.
The process of closing threads is slow, so memory is released slowly, too!
How can I improve this? Thanks for looking.
Upvotes: 3
Views: 10791
Reputation: 82
Thanks @Joakim Erdfelt,
I have updated my diagram image below:
Looking at the log, it seems Jetty only relates to the socket task that receives the request and sends the response; it has nothing to do with processing the data in the job.
log:
2020-07-14 10:43:47.264:DBUG:oeji.ManagedSelector:dev.cash24-connector-15: Selector sun.nio.ch.KQueueSelectorImpl@a9a2d79 waiting with 1 keys
2020-07-14 10:43:47.264:DBUG:oeji.ManagedSelector:dev.cash24-connector-15: Selector sun.nio.ch.KQueueSelectorImpl@a9a2d79 woken up from select, 1/1/1 selected
2020-07-14 10:43:47.264:DBUG:oeji.ManagedSelector:dev.cash24-connector-15: Selector sun.nio.ch.KQueueSelectorImpl@a9a2d79 processing 1 keys, 0 updates
2020-07-14 10:43:47.264:DBUG:oeji.ManagedSelector:dev.cash24-connector-15: selected 1 sun.nio.ch.SelectionKeyImpl@45dd25a SocketChannelEndPoint@5a66441f{l=/127.0.0.1:86,r=/127.0.0.1:50974,OPEN,fill=FI,flush=-,to=9/30000}{io=1/1,kio=1,kro=1}->HttpConnection@159cbb18[p=HttpParser{s=START,0 of -1},g=HttpGenerator@2443cd34{s=START}]=>HttpChannelOverHttp@54a1b123{s=HttpChannelState@509e2f6f{s=IDLE rs=BLOCKING os=OPEN is=IDLE awp=false se=false i=true al=0},r=1,c=false/false,a=IDLE,uri=null,age=0}
2020-07-14 10:43:47.264:DBUG:oeji.ChannelEndPoint:dev.cash24-connector-15: onSelected 1->0 r=true w=false for SocketChannelEndPoint@5a66441f{l=/127.0.0.1:86,r=/127.0.0.1:50974,OPEN,fill=FI,flush=-,to=9/30000}{io=1/0,kio=1,kro=1}->HttpConnection@159cbb18[p=HttpParser{s=START,0 of -1},g=HttpGenerator@2443cd34{s=START}]=>HttpChannelOverHttp@54a1b123{s=HttpChannelState@509e2f6f{s=IDLE rs=BLOCKING os=OPEN is=IDLE awp=false se=false i=true al=0},r=1,c=false/false,a=IDLE,uri=null,age=0}
2020-07-14 10:43:47.265:DBUG:oeji.ChannelEndPoint:dev.cash24-connector-15: task CEP:SocketChannelEndPoint@5a66441f{l=/127.0.0.1:86,r=/127.0.0.1:50974,OPEN,fill=FI,flush=-,to=9/30000}{io=1/0,kio=1,kro=1}->HttpConnection@159cbb18[p=HttpParser{s=START,0 of -1},g=HttpGenerator@2443cd34{s=START}]=>HttpChannelOverHttp@54a1b123{s=HttpChannelState@509e2f6f{s=IDLE rs=BLOCKING os=OPEN is=IDLE awp=false se=false i=true al=0},r=1,c=false/false,a=IDLE,uri=null,age=0}:runFillable:BLOCKING
2020-07-14 10:43:47.265:DBUG:oejuts.EatWhatYouKill:dev.cash24-connector-15: EatWhatYouKill@3fb1549b/SelectorProducer@ea6147e/PRODUCING/p=false/QueuedThreadPool[dev.cash24-connector]@9d5509a{STARTED,0<=6<=6,i=1,r=0,q=0}[NO_TRY][pc=0,pic=0,pec=1,epc=0]@2020-07-14T10:43:47.265+07:00 m=PRODUCE_EXECUTE_CONSUME t=CEP:SocketChannelEndPoint@5a66441f{l=/127.0.0.1:86,r=/127.0.0.1:50974,OPEN,fill=FI,flush=-,to=10/30000}{io=1/0,kio=1,kro=1}->HttpConnection@159cbb18[p=HttpParser{s=START,0 of -1},g=HttpGenerator@2443cd34{s=START}]=>HttpChannelOverHttp@54a1b123{s=HttpChannelState@509e2f6f{s=IDLE rs=BLOCKING os=OPEN is=IDLE awp=false se=false i=true al=0},r=1,c=false/false,a=IDLE,uri=null,age=0}:runFillable:BLOCKING/BLOCKING
2020-07-14 10:43:47.265:DBUG:oejut.QueuedThreadPool:dev.cash24-connector-15: queue CEP:SocketChannelEndPoint@5a66441f{l=/127.0.0.1:86,r=/127.0.0.1:50974,OPEN,fill=FI,flush=-,to=10/30000}{io=1/0,kio=1,kro=1}->HttpConnection@159cbb18[p=HttpParser{s=START,0 of -1},g=HttpGenerator@2443cd34{s=START}]=>HttpChannelOverHttp@54a1b123{s=HttpChannelState@509e2f6f{s=IDLE rs=BLOCKING os=OPEN is=IDLE awp=false se=false i=true al=0},r=1,c=false/false,a=IDLE,uri=null,age=0}:runFillable:BLOCKING startThread=0
But in the monitor I see that the thread closes slowly after the client has received the response (thread 29 is opened and then closed later).
log:
2020-07-14 10:44:47.275:DBUG:oejut.QueuedThreadPool:dev.cash24-connector-29: shrinking QueuedThreadPool[dev.cash24-connector]@9d5509a{STARTED,0<=6<=6,i=1,r=0,q=0}[NO_TRY]
2020-07-14 10:44:47.276:DBUG:oejut.QueuedThreadPool:dev.cash24-connector-29: Thread[dev.cash24-connector-29,5,main] exited for QueuedThreadPool[dev.cash24-connector]@9d5509a{STARTED,0<=5<=6,i=0,r=0,q=0}[NO_TRY]
Upvotes: -1
Reputation: 49462
Your diagram is incorrect.
Jetty has a single ThreadPool, for all operations.
A Thread is a Thread is a Thread.
There is no distinction between selector / acceptor / requests / async processing / async read / async write / websocket / proxy / client / etc.
At last count, in Jetty 9.4.30.v20200611, there are approximately 93 different things in Jetty that can use a Thread from the ThreadPool, and it all depends on what your application is doing, what network protocols you are using, and what features of the various APIs within Jetty you decide to use.
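If you want to see what is actually using threads at any given moment, the most direct view is a server dump. A minimal sketch, assuming an embedded Jetty 9.4 setup where you hold the Server instance (the pool name is only borrowed from your logs):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class ThreadDumpExample
{
    public static void main(String[] args) throws Exception
    {
        QueuedThreadPool threadPool = new QueuedThreadPool();
        threadPool.setName("dev.cash24-connector"); // name taken from the logs above
        threadPool.setDetailedDump(true);           // include per-thread detail in the dump

        Server server = new Server(threadPool);
        server.start();

        // Prints the component tree, including every pooled thread and what
        // it is currently doing (acceptor, selector, reserved, idle, ...).
        server.dumpStdErr();

        server.stop();
    }
}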
Back to your Diagram.
Get rid of the Connection Queue; that box makes no sense. I'm not even sure what you are attempting to document there.
When a Thread is being used for Acceptor purposes it does not participate in request/response handling at all. It accepts the connection and hands it off to the Managed Selector to process the actual accept and the subsequent selector management for that new connection.
The selectors are not threads. There's a Selector Manager that uses a thread; it manages the selectors that the NIO layer works with.
Having more than 1 selector configured is only useful if you are approaching 60,000 active concurrent selector events on a multi-core machine with over 8 cores dedicated to Jetty. (Do not make the mistake of equating concurrent connections to concurrent selector events; you can easily have 200,000 concurrent connections with a concurrent selector event maximum of 16. You need to be monitoring your JVM in production to know what the application selector load actually is.)
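To make the relationship concrete: the acceptor and selector counts are properties of the connector, and both run on threads borrowed from the one server-wide pool. A minimal sketch for embedded Jetty 9.4 (the port and the counts are placeholder values, not recommendations):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class ConnectorExample
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        // 1 acceptor, 1 selector: both use threads from the single
        // server-wide ThreadPool, they are not separate pools.
        ServerConnector connector = new ServerConnector(server, 1, 1);
        connector.setPort(8080);
        server.addConnector(connector);

        server.start();
        server.join();
    }
}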
Jetty uses the "Eat What You Kill" Thread Execution Strategy, which means the "Thread Pool Queue" is not as simple as your diagram has it (with a fetch/push).
Thread creation is an expensive thing (in terms of time), so threads are kept alive in the pool for as long as they can be. Thread creation, on properly tuned JVMs, can often take more time than a GC operation. (Yes, we know this is a controversial statement, but our experience on many different machines, environments, and JVMs over the past 20 years has shown this to be consistently true, even on modern JVMs like OpenJDK 14.)
Thread creation within the Thread Pool can happen in bursts, depending on load. The "load" can be new connections, overall traffic on the connections, or even as simple as the demands you put on the various APIs within your application.
Idle Thread removal is intentionally stair-stepped over time to reduce and/or eliminate the extreme cost of Thread creation seen during bursts in load.
Jetty uses the org.eclipse.jetty.util.thread.ThreadPool interface to work with Thread Pools.
Each ThreadPool has a ThreadPoolBudget, which the various APIs within Jetty participate in to indicate their required operational threading. Many APIs within Jetty, once you start using them, automatically trigger a need to "reserve" X number of threads in the ThreadPool so that they are always available for that API. Example: a new HTTP/2 connection increases the "reserved thread" count on the Thread Pool by 1 for the life of the physical connection (to handle the HTTP/2 sessions for the various sub-requests). The existence of the physical connection does not mean a thread is being used within the Thread Pool; only once a selector and/or API usage triggers it does it use the Thread Pool normally, taking whatever thread is currently available. This lets the ThreadPool implementation manage the need for "reserved threads" and ensures there is always a thread to process the low-level behavior of the HTTP/2 sessions and sub-requests (in this example it is easiest to think of the physical connection as a sub-selector for the HTTP/2 sessions it is managing). This "reserved thread" concept is crucial for proper operation: without it, many critical tasks would experience thread starvation.
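As an illustration of the budget mechanism, here is a sketch of how a component can lease threads from the pool's budget for its lifetime (assuming Jetty 9.4's ThreadPoolBudget.leaseFrom; the component and the number of leased threads are hypothetical):

import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.eclipse.jetty.util.thread.ThreadPoolBudget;

public class BudgetExample
{
    public static void main(String[] args) throws Exception
    {
        QueuedThreadPool pool = new QueuedThreadPool(200, 8);

        // A (hypothetical) component that needs 2 threads for its whole
        // lifetime leases them from the pool's budget. If the combined
        // leases ever exceed maxThreads, Jetty fails fast at startup
        // instead of starving at runtime.
        Object component = new Object();
        ThreadPoolBudget.Lease lease = ThreadPoolBudget.leaseFrom(pool, component, 2);

        // ... component lifetime ...

        lease.close(); // returns the budgeted threads when the component stops
    }
}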
The minimum and maximum threads your application needs depend on the behavior of your application and the load it is experiencing, not on some arbitrary startup configuration.
Depending on the implementation of org.eclipse.jetty.util.thread.ThreadPool you choose, you have different options for tweaking how it handles things like idle threads and idle thread removal.
In QueuedThreadPool (the most common ThreadPool in use in the world of Jetty) the idle timeout controls when a Thread in the pool is both identified as "idle" and "suitable to be stopped".
The idle thread cleanup will remove 1 idle thread at a time, at each idle timeout interval.
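For example, with an embedded Jetty 9.4 server the pool bounds and the idle timeout are set on the QueuedThreadPool you hand to the Server. A minimal sketch (the numbers are placeholders, not recommendations):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class PoolConfigExample
{
    public static void main(String[] args) throws Exception
    {
        // maxThreads=200, minThreads=8, idleTimeout=60000 ms.
        // At most one idle thread is stopped per idle-timeout interval,
        // which is the stair-stepped "shrinking" visible in the DEBUG log above.
        QueuedThreadPool threadPool = new QueuedThreadPool(200, 8, 60_000);

        Server server = new Server(threadPool);
        server.start();
        server.join();
    }
}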
The general advice is that every time a Thread has to be created, you make your application slower to produce responses to your clients.
Another point of contention with developers wanting to tweak threading in Jetty is that there is absolutely no relationship between a single thread and a single request/response exchange. A single request/response exchange can be handled across many Threads (and sometimes on multiple threads concurrently, depending on the APIs you choose to use).
Attempting to control anything about request/response handling by manipulating the Thread Pool will not work (such as attempting to limit the number of active concurrent requests by setting a low maximum number of threads).
Also be careful with ThreadLocal usage within your applications. ThreadLocals do work, but you must understand the scope of the Thread you are attaching them to in order to have any long-term success with them.
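A common defensive pattern is to treat a ThreadLocal as scoped to a single dispatch and clear it before the pooled thread is reused. A general servlet sketch (not Jetty-specific; the request-id ThreadLocal is hypothetical), keeping in mind that with async APIs a single exchange may hop between threads, so the value will not follow it:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RequestIdServlet extends HttpServlet
{
    // Hypothetical per-request context stored in a ThreadLocal.
    private static final ThreadLocal<String> REQUEST_ID = new ThreadLocal<>();

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException
    {
        REQUEST_ID.set(request.getHeader("X-Request-Id"));
        try
        {
            response.getWriter().println("handled " + REQUEST_ID.get());
        }
        finally
        {
            // The pooled thread will be reused for unrelated work,
            // so never let per-request state outlive this dispatch.
            REQUEST_ID.remove();
        }
    }
}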
Upvotes: 13