Reputation: 221
I have an Artemis broker (2.10.1) running within a Docker container with one address but many (500+) queues. Each queue has a filter; the filters don't overlap, and the routing type is multicast.
broker
- address: example
- multicast
- queue: dummy1 (filter: dummy=1)
- queue: dummy2 (filter: dummy=2)
- queue: dummy3 (filter: dummy=3)
- ...
When the client connects, the CPU usage for client and broker goes from ~5% up to ~40% according to htop (~20% normal + ~20% kernel). JMX reports ~10% CPU usage. When switching htop to tree view I can see the ~10% thread and many 0.x% threads. The queues are empty; I'm neither producing nor consuming messages. The whole system is (or should be) idle. The client establishes a single connection but one session per queue, resulting in 500+ sessions.
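For reference, the client setup is roughly equivalent to this core-API sketch (the broker URL, queue names, and the empty handler are placeholder assumptions; the 500+ queues and their filters are assumed to already be defined in broker.xml):

    import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
    import org.apache.activemq.artemis.api.core.client.ClientConsumer;
    import org.apache.activemq.artemis.api.core.client.ClientSession;
    import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
    import org.apache.activemq.artemis.api.core.client.ServerLocator;

    public class ManyQueueClient {
        public static void main(String[] args) throws Exception {
            // Single connection to the broker (URL is a placeholder)
            ServerLocator locator = ActiveMQClient.createServerLocator("tcp://localhost:61616");
            ClientSessionFactory factory = locator.createSessionFactory();

            // One session and one consumer per pre-configured queue (dummy1..dummy500),
            // matching the "single connection, 500+ sessions" setup described above
            for (int i = 1; i <= 500; i++) {
                ClientSession session = factory.createSession();
                ClientConsumer consumer = session.createConsumer("dummy" + i);
                consumer.setMessageHandler(message -> { /* asynchronous handling */ });
                session.start();
            }

            Thread.currentThread().join(); // stay idle, keeping all sessions open
        }
    }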
What's wrong with my configuration? I can't see a reason for such CPU usage and load.
Update:
I did some more tests and it turns out that the CPU usage/load only happens if Docker is involved.
I'm still doing more research; I just wanted to share the current state so that Artemis is no longer blamed for the bad figures.
By the way, an interesting side note: while idle, the two applications that use only the Artemis dependencies (core & JMS) exchange nothing but a ping message every 30 seconds. The application embedded in Spring Boot using starter-artemis is very talkative. I can't tell you yet what this is about, except that I saw something about hornetq forced delivery seq. I assume that this message volume is why the CPU usage goes from <5% to 5-10%.
Update 2:
Spring Boot with starter-artemis is talkative because by default it uses the DefaultJmsListenerContainerFactory, which polls. If there aren't any messages within a given timeout, it issues a forced pull, which is the reason for those hornetq forced delivery seq messages. In my core/JMS tests I used the asynchronous message handler, which Spring Boot starter-artemis also provides if you switch to the SimpleJmsListenerContainerFactory.
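For anyone hitting the same thing, here is a minimal sketch (class and bean names are my own, not from the starter) of overriding the auto-configured polling factory with the event-driven one:

    import javax.jms.ConnectionFactory; // javax.jms for Spring Boot 2.x; jakarta.jms on newer versions

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.config.SimpleJmsListenerContainerFactory;

    @Configuration
    public class JmsConfig {

        // Replaces the default polling DefaultJmsListenerContainerFactory;
        // the bean name "jmsListenerContainerFactory" is what @JmsListener uses by default
        @Bean
        public SimpleJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
            SimpleJmsListenerContainerFactory factory = new SimpleJmsListenerContainerFactory();
            factory.setConnectionFactory(connectionFactory);
            return factory;
        }
    }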
Upvotes: 0
Views: 1363
Reputation: 191
The broker has recently been improved for scenarios like this (e.g. https://issues.apache.org/jira/browse/ARTEMIS-2990): I strongly suggest trying a more recent version.
If that doesn't fix your issue, I suggest running https://github.com/jvm-profiling-tools/async-profiler/ to sample CPU usage (it includes GC, compilation, and native stack traces too).
Consider that the original address/queue management used synchronized operations that made Java threads contend heavily on the hot path: this can cause kernel/system CPU cycles to be spent managing the contention (remember: contended Java locks are backed by OS mutexes), and such CPU usage won't appear in JMX.
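As an illustration only, an async-profiler run might look like this (duration, output path, and PID are placeholders, and the output format depends on the async-profiler version; since the broker runs in Docker, the profiler has to be executed inside the container or with access to its PID namespace):

    # Sample on-CPU stacks for 30 seconds and write a flame graph
    ./profiler.sh -e cpu -d 30 -f /tmp/broker-cpu.html <broker-pid>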
Upvotes: 2