EM0

Reputation: 6357

RabbitMQ consumer slow when there are un-ACKed messages outstanding

I have a .NET Core console application that reads messages from RabbitMQ and saves the data to a database. It uses RabbitMQ.Client assembly 5.1.0 and sets up an EventingBasicConsumer like this:

var factory = new ConnectionFactory
{
    HostName = _hostName,
    UserName = _userName,
    Password = _password,
    RequestedHeartbeat = 20,
    AutomaticRecoveryEnabled = true,
    NetworkRecoveryInterval = TimeSpan.FromSeconds(10)
};

_connection = factory.CreateConnection();
_channel = _connection.CreateModel();
_channel.BasicQos(0, prefetchCount, false);   // limit on un-ACKed deliveries, applied per consumer (global: false)

var consumer = new EventingBasicConsumer(_channel);
consumer.Received += HandleMessage;
_consumerTag = _channel.BasicConsume(_queueName, false, consumer);   // autoAck: false, so messages stay un-ACKed until BasicAck

If I call _channel.BasicAck on the message inside my HandleMessage method, i.e. as soon as each message is received, the rate of messages delivered is ~1500/second. However, I want to wait to ACK each message until it's saved to the DB. If I do that, the rate drops to 300-500/second.

Saving to the DB is done on a separate thread and is not a bottleneck. HandleMessage only stores the message in memory to be saved later on the other thread. I've tried experimenting with various prefetchCount values from 100 to 100,000 and it doesn't seem to matter. If I profile the application, I can see that the AMQP session thread ("WorkPool-Session#1:Connection(...)") spends most of its time waiting on a WaitHandle in RabbitMQ.Client.ConsumerWorkService+WorkPool.Loop().
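
A simplified sketch of what I mean (the field names and SaveToDatabase below are illustrative, not my exact code):

// needs: using System.Collections.Concurrent; using System.Threading;
//        using RabbitMQ.Client; using RabbitMQ.Client.Events;
private readonly ConcurrentQueue<(ulong DeliveryTag, byte[] Body)> _pendingSaves
    = new ConcurrentQueue<(ulong DeliveryTag, byte[] Body)>();

private void HandleMessage(object sender, BasicDeliverEventArgs ea)
{
    // No blocking work here - just remember the message for the save thread.
    _pendingSaves.Enqueue((ea.DeliveryTag, ea.Body));
}

private void SaveLoop() // runs on the separate save thread
{
    while (true)
    {
        if (_pendingSaves.TryDequeue(out var msg))
        {
            SaveToDatabase(msg.Body);                 // placeholder for the DB insert
            lock (_channel)                           // IModel is not thread-safe
                _channel.BasicAck(msg.DeliveryTag, multiple: false);
        }
        else
        {
            Thread.Sleep(1);
        }
    }
}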

What am I doing wrong? How can I consume messages faster without ACKing them immediately? (The server is RabbitMQ 3.7.7)

Upvotes: 3

Views: 3228

Answers (2)

Hgrammer

Reputation: 17

I consistently get a rate of 48-51k messages/sec on average. Either:

Remove back pressure (no QoS/prefetch limit) and increase the size of the consumer's internal queue.

You run the risk of losing messages here if you don't have enough RAM or the internal queue fills up.

Given enough RAM, you would need over 2.1 billion messages to fill an internal queue at its maximum capacity (int.MaxValue).

Or, for guaranteed reliability regardless of hardware/system resources, use a high prefetch count and execute only async code in the consumers. Any blocking code should be handed off to a separate thread (see the sketch below). I achieve 20-25k msg/sec consistently this way.

My experiments were on a single machine with 16 GB of RAM and 145 million messages stored in the RabbitMQ queue for benchmarking purposes.
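
Roughly, the second option looks like this (a sketch, not my exact code; the host name, queue name and prefetch value are placeholders):

// needs: using System.Threading.Tasks; using RabbitMQ.Client; using RabbitMQ.Client.Events;
var factory = new ConnectionFactory
{
    HostName = "localhost",            // placeholder
    DispatchConsumersAsync = true      // required for AsyncEventingBasicConsumer
};

var connection = factory.CreateConnection();
var channel = connection.CreateModel();
channel.BasicQos(0, 10000, false);     // high prefetch; tune it to your RAM

var consumer = new AsyncEventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    // Keep this callback non-blocking: capture what you need and return.
    var deliveryTag = ea.DeliveryTag;
    var body = ea.Body;
    // Hand (deliveryTag, body) off to your own worker queue/thread here and
    // ACK from that thread once the slow work is finished.
    return Task.CompletedTask;
};
channel.BasicConsume("my-queue", false, consumer);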

Upvotes: 0

M.A. Hanin

Reputation: 8084

The prefetchCount limits the number of unacknowledged messages that will be delivered to your consumer. You can increase this value to receive more messages from the queue without acknowledging them.

However, since the database persistence appears to be the bottleneck, I expect the delivery rate to remain the same. Let's say it takes 2 ms to complete the database insertion for each message, which is 500 insertions/sec. Once you've maxed out the number of outstanding messages (the prefetch count), you are ACKing at a rate of 500 messages/sec, and therefore you'll receive new messages at that same rate. The buffer size doesn't matter much for this bottleneck.

To improve system throughput, you can add more consumers, or improve throughput against the database in some way or another (e.g. bulk inserts, schema improvements, sharding), but RabbitMQ has no means of holding on to an infinite number of unacknowledged (in-flight) messages.
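
For example, a bulk-insert approach can also cut down the number of ACK round-trips by acknowledging a whole batch at once. This is only a sketch: BulkInsert and the batch size are placeholders, and BasicAck with multiple: true acknowledges every delivery on the channel up to and including the given tag.

// needs: using System.Collections.Concurrent; using System.Collections.Generic;
//        using System.Linq; using RabbitMQ.Client;
static void SaveLoop(IModel channel,
                     BlockingCollection<(ulong DeliveryTag, byte[] Body)> pending)
{
    var batch = new List<(ulong DeliveryTag, byte[] Body)>();

    foreach (var msg in pending.GetConsumingEnumerable())
    {
        batch.Add(msg);
        if (batch.Count >= 500)                      // tune to your bulk insert size
        {
            BulkInsert(batch.Select(m => m.Body));   // placeholder for the bulk DB insert
            var lastTag = batch[batch.Count - 1].DeliveryTag;
            lock (channel)                           // IModel is not thread-safe
                channel.BasicAck(lastTag, multiple: true);  // one ACK covers the whole batch
            batch.Clear();
        }
    }
}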

Upvotes: 1
