Reputation: 141574
I am hoping to use EasyNetQ for a consumer task where I read a batch of messages (say 100) from one or more RabbitMQ queues and process each batch upstream. I don't want to Ack the messages until the upstream process completes, because if my process crashes while that processing is happening, I'd like the messages not to be dropped.
My test code at the moment looks like this (pseudocode; synchronization details and so on are omitted):
var bus = RabbitHutch.CreateBus("host=localhost");
var consumer = bus.Advanced.Consume(queue, (body, properties, info) =>
{
    MessageHandler(body, properties, info);
});

// ...

static readonly List<(ReadOnlyMemory<byte> content, MessageReceivedInfo info)> messageBatch = new();

static void MessageHandler(ReadOnlyMemory<byte> content, MessageProperties props, MessageReceivedInfo info)
{
    messageBatch.Add((content, info));
    if (messageBatch.Count >= 100)
    {
        // trigger another thread to do the batch processing
    }
}
The default behaviour of EasyNetQ is that bus.Advanced.Consume does an Ack as soon as the handler function returns. Is it possible to turn that off, and if so, what function can I call to send the Acks later on?
NB. RabbitMQ Consume Messages in Batches and Ack them all at once addresses this question for RabbitMQ.Client, but I would like to know if it's possible in the EasyNetQ client.
Upvotes: 0
Views: 54
Reputation: 141574
It's possible to do this by using the AdvancedBus.Consume function with an async handler that returns an AckStrategy (Ack/Nack/Reject), and manually suspending each handler via a TaskCompletionSource until its batch has completed.
However, this approach still doesn't take advantage of the underlying RabbitMQ.Client's ability to batch-Ack, so it will not be as performant as using RabbitMQ.Client directly.
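For comparison, batch-Ack in RabbitMQ.Client means acknowledging everything up to a delivery tag with a single basic.ack. A minimal sketch against the RabbitMQ.Client 6.x IModel API, assuming you already have an open channel and have collected the delivery tags of the batch (the deliveryTags variable is hypothetical):

// Sketch only, RabbitMQ.Client 6.x: `channel` is an open IModel and
// `deliveryTags` holds the tags of the batch's messages in delivery order.
ulong lastDeliveryTag = deliveryTags[^1];

// One basic.ack frame acknowledges every un-acked delivery on this
// channel up to and including lastDeliveryTag.
channel.BasicAck(lastDeliveryTag, multiple: true);

EasyNetQ's handler-based consumer acks each message individually, which is why it cannot match this.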
There is an undocumented class PullingConsumer, introduced in issue #1073. However, as of v7.8.0 it is extremely slow: PullAsync and PullBatchAsync can only retrieve about 5 messages per second on the same test setup where AdvancedBus.Consume retrieves about 3000 messages per second, so it is not usable in practice.
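For completeness, pulling-consumer usage would look roughly like the sketch below. The CreatePullingConsumer, PullAsync, AckAsync names, the PullResult members, and their signatures are my assumptions based on the v7.x source around issue #1073; verify them against the EasyNetQ version you use.

// Rough sketch only: API names/signatures assumed from EasyNetQ v7.x (issue #1073).
// `cancellationToken`, `batchSize`, `YourUnpacker` and `YourBatchProcessFunction`
// are the same names used in the handler-based code below.
using var pullingConsumer = bus.Advanced.CreatePullingConsumer(new Queue(queueName), autoAck: false);

var batch = new List<PullResult>();
while (batch.Count < batchSize)
{
    var result = await pullingConsumer.PullAsync(cancellationToken);
    if (!result.IsAvailable)
        break;                              // queue drained before the batch filled
    batch.Add(result);
}

if (batch.Count > 0)
{
    await YourBatchProcessFunction(batch.Select(r => YourUnpacker(r.Body)), cancellationToken);

    // Ack everything up to the last delivery tag in one call.
    await pullingConsumer.AckAsync(batch[^1].ReceivedInfo.DeliveryTag, multiple: true, cancellationToken);
}

Given the pull performance, I went with the handler-based approach instead; the full version is below.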
// In startup: prefetch must be at least batchSize, otherwise delivery stalls
// on un-acked messages before a batch can ever fill.
bus!.Advanced.Consume(new Queue(queueName), Handler2, opts => opts.WithPrefetchCount(1000));

// Shared batching state (guarded by _lock)
private readonly object _lock = new();
private readonly Queue<PayloadTask> batchQueue = new();
private const int batchSize = 100;

private async Task<AckStrategy> Handler2(ReadOnlyMemory<byte> message,
    MessageProperties pr, MessageReceivedInfo info,
    CancellationToken cancellationToken)
{
    Payload payload = YourUnpacker(message);

    // Put into queue and hang until a batch is ready to go
    PayloadTask[] tasks = Array.Empty<PayloadTask>();
    var tcs = new TaskCompletionSource<AckStrategy>(TaskCreationOptions.RunContinuationsAsynchronously);
    lock (_lock)
    {
        batchQueue.Enqueue(new PayloadTask(tcs, payload));
        if (batchQueue.Count >= batchSize)
        {
            tasks = batchQueue.ToArray();
            batchQueue.Clear();
        }
    }

    if (tasks.Length > 0)
    {
        // Note: this will also set completion for the current task
        await FireBatch(tasks, cancellationToken);
    }

    return await tcs.Task;
}
private async Task FireBatch(PayloadTask[] tasks, CancellationToken cancellationToken)
{
    try
    {
        IEnumerable<Payload> payloads = tasks.Select(o => o.payload);
        await YourBatchProcessFunction(payloads, cancellationToken);

        // Release every waiting handler with an Ack
        foreach (var task in tasks)
            task.tcs.SetResult(AckStrategies.Ack);
    }
    catch (Exception)
    {
        // On failure, requeue the whole batch
        foreach (var task in tasks)
            task.tcs.SetResult(AckStrategies.NackWithRequeue);
    }
}
class PayloadTask
{
    public TaskCompletionSource<AckStrategy> tcs;
    public Payload payload;

    public PayloadTask(TaskCompletionSource<AckStrategy> _tcs, Payload _payload)
        => (tcs, payload) = (_tcs, _payload);
}
Upvotes: 0