Reputation: 1
I'm currently battling with a performance issue in ActiveMQ Artemis, and I'm a bit lost. Here's the scenario:
I'm running a single ActiveMQ Artemis server with multiple consumers using JMS. The messages are configured to use message grouping for sequential processing.
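For reference, grouping is done by setting the JMSXGroupID property on each message. The producer side looks roughly like this (a simplified sketch; the queue name and group key are placeholders, the real keys come from our domain data):

    import javax.jms.*;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    // Simplified sketch of the producer side; "work.queue" and "orders-1" are placeholders.
    ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
    try (Connection connection = cf.createConnection();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
        MessageProducer producer = session.createProducer(session.createQueue("work.queue"));
        TextMessage message = session.createTextMessage("payload");
        // All messages sharing the same JMSXGroupID go to the same consumer, in order.
        message.setStringProperty("JMSXGroupID", "orders-1");
        producer.send(message);
    }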
Initially, message processing is smooth: messages are consumed in the expected order and all consumers are utilized.
However, after a period of time, I've observed that certain consumers start waiting for one consumer to finish processing a message before starting on a new one, even if there were messages available from other message groups.
Given that processing a message can take up to an hour in the worst case, this waiting behavior significantly impacts the overall processing performance.
I'm reaching out to the community for insights and suggestions on potential tweaks or optimizations that could help address this performance degradation. Are there specific configurations or settings that I should be exploring?
The Artemis version is currently 2.26.0, but this issue was also present in other versions I tested.
Here's a picture of the issue. The blue lines represent messages being processed by a consumer, relative to time on the X-axis.
I've already experimented with adjusting the consumerWindowSize parameter as well as changing useGlobalPools to false, but unfortunately I haven't seen any positive effects so far.
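For what it's worth, I've been setting these as URL parameters on the client connection factory, roughly like this (a sketch; the host and values are placeholders and varied during testing):

    // Connection URL used by the consumers (sketch). consumerWindowSize is the prefetch
    // buffer size in bytes (0 = no prefetch), and useGlobalPools=false gives each
    // connection factory its own thread pools instead of the shared global ones.
    String url = "tcp://broker-host:61616?consumerWindowSize=0&useGlobalPools=false";
    ConnectionFactory cf = new ActiveMQConnectionFactory(url);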
The problem was present with 2.18.0 as well, and with the versions we used before that (<= 2.14). I should test with the latest version, but since the issue has been around for so long I suspect it's some kind of bad configuration on my part.
Upvotes: 0
Views: 398
Reputation: 35122
The basic semantics of a queue are first-in-first-out (i.e. FIFO). Message grouping doesn't change that. This presents a challenge: since messages in the same group go to the same consumer, one slow consumer can slow down message consumption for the entire queue.
Typically slow consumers are configured with a lower consumerWindowSize (e.g. 0). This prevents the slow consumers from prefetching a lot of messages and starving other consumers. However, with slow consumers and message grouping I recommend that you actually increase the consumerWindowSize on the slow consumers. Otherwise a slow consumer may block the dispatch of a message that can only go to itself and therefore prevent any other consumers from getting messages.
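If the slow and fast consumers can use separate connection factories, giving the slow ones a larger window could look roughly like this (a sketch; the URL and the size are just example values):

    // Hypothetical setup: slow consumers get a large prefetch buffer so the broker can
    // keep dispatching grouped messages to them without blocking the rest of the queue.
    ActiveMQConnectionFactory slowCf = new ActiveMQConnectionFactory("tcp://broker-host:61616");
    slowCf.setConsumerWindowSize(10 * 1024 * 1024); // 10 MiB prefetch, example value only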
To be clear, the broker dispatches the messages from a queue one at a time to the consumers. The consumers don't have to acknowledge the messages (i.e. fully process them) in order for the broker to keep dispatching messages from the queue. There just has to be a consumer that is able to receive the message - either for immediate processing or to store in its local prefetch buffer while waiting for the currently processing message to be acknowledged. If the broker cannot dispatch the message to any consumer then it will just wait for a consumer to become available. For example, if it is trying to dispatch a grouped message and the only consumer that can accept this message can't currently take it (e.g. its prefetch buffer is full) then the broker will just wait for that consumer to be free. It won't just skip that message and try to dispatch another one, as that would violate the FIFO semantics of the queue.
The simplest way to test this would be to set consumerWindowSize=-1 (i.e. consumer flow control disabled). The broker will immediately dispatch any grouped message to the proper consumer (even if that consumer is slow) and then move on to dispatch the next one, preventing starvation.
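As a URL parameter that test would look roughly like this (host and port are placeholders):

    // -1 removes consumer flow control entirely: the broker can buffer any number of
    // messages on the client, so a grouped message never waits for a full window.
    String url = "tcp://broker-host:61616?consumerWindowSize=-1";
    ConnectionFactory cf = new ActiveMQConnectionFactory(url);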
It is also worth ensuring that groups can be automatically rebalanced. This isn't enabled by default, but I would recommend it in your case.
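Group rebalancing is controlled via address-settings in broker.xml; something like the following should enable it (a sketch; match the address to your actual queue):

    <address-settings>
       <!-- match the address your grouped messages use -->
       <address-setting match="work.queue">
          <!-- reassign groups across consumers when a new consumer is added -->
          <default-group-rebalance>true</default-group-rebalance>
          <!-- optionally pause dispatch while groups are being rebalanced -->
          <default-group-rebalance-pause-dispatch>true</default-group-rebalance-pause-dispatch>
       </address-setting>
    </address-settings>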
Upvotes: 0