Reputation: 388316
I have a Java web application running on a Tomcat server (Linux). In the production environment I'm facing a performance issue: at random intervals the jsvc process that Tomcat runs under starts consuming 90-100% CPU. I'm unable to find the trigger for this event. The server is a quad-core system, and memory consumption shows no abnormalities.
How can I monitor which thread (and its application stack trace) is causing the problem?
I'm checking with jconsole and PSI Probe, but neither gives any detailed information about which thread inside the application is causing the abnormal CPU usage.
Upvotes: 7
Views: 35413
Reputation: 879
There's a Linux tool called "threadcpu" which measures the CPU usage of each thread. In the case of a Java thread, it uses jstack to resolve and print the thread name.
http://www.tuxad.com/blog/archives/2018/10/01/threadcpu_-_show_cpu_usage_of_threads/index.html
Upvotes: 1
Reputation: 15953
Another tool for showing the top cpu-consuming threads is jvmtop
Upvotes: 4
Reputation: 17629
VisualVM is what you're looking for. It ships with newer JDKs and allows you to monitor thread usage.
Upvotes: 4
Reputation: 7778
Just my 2 cents, but I'm wondering whether you're actually hitting a memory issue: the CPU peaks could be GC activity. So while you're monitoring your Tomcat with jconsole, have a look at the Memory tab and check whether heap usage is climbing too high.
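To check the GC theory from the command line, `jstat -gcutil <pid> 1000` (shipped with the JDK) prints heap occupancy and collection counts once per second. Here's a minimal sketch of reading one such sample, assuming the Java 8 column layout; the sample numbers are invented for illustration:

```shell
#!/bin/sh
# One invented sample line in `jstat -gcutil` column layout:
#   S0     S1     E      O      M      CCS    YGC   YGCT    FGC   FGCT     GCT
SAMPLE='  0.00  99.95  68.02  99.89  95.43  90.21   1284   12.340   412  201.220  213.560'
# Old-gen occupancy (column 4) pinned near 100 together with a climbing
# full-GC count (column 9) is the classic signature of GC burning the CPU.
OLD=$(echo "$SAMPLE" | awk '{print $4}')
FGC=$(echo "$SAMPLE" | awk '{print $9}')
echo "old=${OLD}% fullGCs=${FGC}"    # prints old=99.89% fullGCs=412
```

If old-gen occupancy stays high and the FGC count keeps rising between samples, the CPU spikes are most likely back-to-back full collections rather than application code.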
Upvotes: 2
Reputation: 262494
You can get a stacktrace dump for all threads in any Java application by sending it a QUIT signal.
kill -QUIT [processId]
This will show up on the process's stdout.
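Note that for Tomcat started via jsvc, stdout is usually redirected to a log file (commonly catalina.out), so the dump lands there rather than in your terminal. A quick sketch of triggering and skimming a dump; the pid and log path are placeholders for your install, and the dump text below is an invented excerpt for illustration:

```shell
#!/bin/sh
# Trigger the dump (placeholder pid); the JVM keeps running afterwards:
#   kill -QUIT 4242
#   tail -n 200 catalina.out      # exact path depends on your Tomcat setup
# The dump is plain text, one block per thread; grepping the thread states
# shows where threads sit. Invented three-thread excerpt:
DUMP='"http-exec-3" daemon prio=5 tid=0x0a nid=0x1a2b runnable
"http-exec-4" daemon prio=5 tid=0x0b nid=0x1a2c waiting on condition
"ajp-exec-1" daemon prio=5 tid=0x0c nid=0x1a2d runnable'
echo "$DUMP" | grep -c 'runnable'    # prints 2: two threads actively on CPU
```

Threads stuck in `runnable` across several dumps taken a few seconds apart are the usual suspects for sustained CPU usage.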
Upvotes: 2
Reputation: 5059
One relatively easy way to do this (which may or may not work in your case, depending on how long the behavior lasts):
When your app exhibits the behavior you want to debug (in this case, 90-100% CPU use) use jstack on the process ID:
http://download.oracle.com/javase/6/docs/technotes/tools/share/jstack.html
to examine which threads are running and in which methods they're spending time. If you do that a few times, it may be relatively easy to spot the culprit call chain. You can then just debug the entrance to that chain.
It's not necessarily the best or most elegant method, but it's very easy to do and may be all you need. I would start there. It's akin to the "printf is the best debugger I've ever used" philosophy.
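The step this leaves implicit is matching the hot OS-level thread to its Java stack in the jstack output. A minimal sketch, with placeholder pid/tid values (on Linux JVMs, `top -H` shows per-thread CPU, and jstack tags each thread with a hexadecimal `nid=` field):

```shell
#!/bin/sh
# 1. Find the hottest native thread of the JVM process (placeholder pid 4242):
#      top -H -p 4242       # per-thread view; note the TID of the busy thread
# 2. top prints thread ids in decimal, but jstack reports them as hex in the
#    "nid=0x..." field, so convert before searching the dump:
TID=12345                   # placeholder: decimal thread id copied from top
NID=$(printf '0x%x' "$TID")
echo "$NID"                 # prints 0x3039
# 3. Grep that nid in the dump to get the Java stack of the hot thread:
#      jstack 4242 | grep -A 20 "nid=$NID"
```

Repeating steps 1-3 a few times during a CPU spike quickly shows whether the same thread (and the same call chain) keeps turning up.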
Upvotes: 7