urbanhusky

Reputation: 1385

Improve performance of event sourcing projections to RDBMS (SQL) via .NET

I'm currently working on a prototype in C# that utilises CQRS and event sourcing, and I've hit a performance bottleneck in my projections to an SQL database.

My first prototype was built with Entity Framework 6, code first. This choice was made primarily to get going and because the read side would benefit from LINQ.

Every (applicable) event is consumed by multiple projections, which either create or update the corresponding entity.

Such a projection currently looks like this:

public async Task HandleAsync(ItemPlacedIntoStock @event)
{
    var bookingList = new BookingList();
    bookingList.Date = @event.Date;
    bookingList.DeltaItemQuantity = @event.Quantity;
    bookingList.IncomingItemQuantity = @event.Quantity;
    bookingList.OutgoingItemQuantity = 0;
    bookingList.Item = @event.Item;
    bookingList.Location = @event.Location;
    bookingList.Warehouse = @event.Warehouse;

    using (var repository = new BookingListRepository())
    {
        repository.Add(bookingList);
        await repository.Save();
    }
}

This isn't performing well, most likely because I call DbContext.SaveChanges() in the IRepository.Save() method: once for each event.
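The repository itself is a thin wrapper around the DbContext. A minimal sketch of roughly what it looks like (BookingContext and the DbSet name are placeholders, not the actual class names):

public class BookingListRepository : IDisposable
{
    // Placeholder EF6 context; the real context type is not shown in the question.
    private readonly BookingContext _context = new BookingContext();

    public void Add(BookingList bookingList)
    {
        _context.BookingLists.Add(bookingList);
    }

    public Task Save()
    {
        // One SaveChanges() round trip per handled event.
        return _context.SaveChangesAsync();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}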

What options should I explore next? I don't want to spend days chasing ideas that might prove to be only marginally better.

I currently see the following options:

I expect to see millions of events because we plan to source a large legacy application and migrate data in the form of events. New projections will also be added often enough so the processing speed is an actual issue.

Upvotes: 7

Views: 2362

Answers (3)

urbanhusky

Reputation: 1385

I've seen performance improvements of several orders of magnitude, even with Entity Framework, when batching the commits and improving my overall projection engine.

  • Each projection is a separate subscription on the Event Store. This allows each projection to run at its maximum speed. The theoretical maximum of my pipeline on my machine was 40,000 events per second (possibly more; I ran out of events to sample with).
  • Each projection maintains a queue of events and deserialises the JSON to POCOs. Multiple deserialisations per projection run in parallel. I also switched from data contract serialisation to Json.NET.
  • Each projection supports the notion of a unit of work. The unit of work is committed after processing 1000 events or if the deserialisation queue is empty (i.e. I am either at the head position or experienced a buffer underrun). This means that a projection commits more often if it is only a few events behind. (See the sketch after this list.)
  • Made use of async TPL processing with interleaving of fetching, queueing, processing, tracking and committing.
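A minimal sketch of the batched unit of work from the third point, assuming an EF6 BookingContext and a DeserialisationQueueIsEmpty() check (both names are illustrative, not my actual classes): the projection keeps one context per batch and only calls SaveChanges when the batch is full or the queue has drained.

public class BookingListProjection
{
    private BookingContext _unitOfWork = new BookingContext(); // illustrative EF6 context
    private int _eventsSinceCommit;

    public async Task HandleAsync(ItemPlacedIntoStock @event)
    {
        _unitOfWork.BookingLists.Add(new BookingList
        {
            Date = @event.Date,
            DeltaItemQuantity = @event.Quantity,
            IncomingItemQuantity = @event.Quantity,
            OutgoingingItemQuantity = 0,
            Item = @event.Item,
            Location = @event.Location,
            Warehouse = @event.Warehouse
        });
        _eventsSinceCommit++;

        // Commit after 1000 events, or as soon as the deserialisation queue runs dry.
        if (_eventsSinceCommit >= 1000 || DeserialisationQueueIsEmpty())
        {
            await _unitOfWork.SaveChangesAsync();
            _unitOfWork.Dispose();
            _unitOfWork = new BookingContext();
            _eventsSinceCommit = 0;
        }
    }

    // Illustrative placeholder: in the real engine this compares the pending and
    // processed counters maintained by the pipeline (see the second list below).
    private bool DeserialisationQueueIsEmpty()
    {
        return false;
    }
}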

This was achieved by using the following technologies and tools:

  • The ordered, queued and parallel deserialisation into POCOs is done via a TPL Dataflow TransformBlock with a BoundedCapacity somewhere over 100. Maximum degree of parallelism was Environment.ProcessorCount (i.e. 4 or 8). I saw a massive increase in performance with a queue size of 100-200 vs. 10: from 200-300 events per second to 10,000 events per second. This most likely means that a buffer of 10 was causing too many underruns and thus committed the unit of work too often. (See the sketch after this list.)
  • Processing is dispatched asynchronously from a linked ActionBlock.
  • Each time an event is deserialised, I increment a counter for pending events.
  • Each time an event is processed, I increment a counter for processed events.
  • The unit of work is committed after 1000 processed events, or whenever the deserialisation buffer runs out (number of pending events = number of processed events). I reduce both counters by the number of processed events. I don't reset them to 0 because other threads might have increased the number of pending events.
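A minimal sketch of how these pieces might fit together, assuming Json.NET and an IProjection abstraction with HandleAsync and CommitUnitOfWorkAsync methods (none of these names are from my actual code; the real engine is more involved):

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
using Newtonsoft.Json;

public class ProjectionPipeline
{
    private int _pending;    // events deserialised but not yet accounted for in a commit
    private int _processed;  // events processed since the last commit

    public ProjectionPipeline(IProjection projection)
    {
        // Ordered, queued and parallel deserialisation into POCOs.
        var deserialise = new TransformBlock<string, object>(
            json =>
            {
                Interlocked.Increment(ref _pending);
                return JsonConvert.DeserializeObject(json);
            },
            new ExecutionDataflowBlockOptions
            {
                BoundedCapacity = 200,
                MaxDegreeOfParallelism = Environment.ProcessorCount
            });

        // Processing is dispatched from a linked ActionBlock (a single consumer,
        // since the projection is the only writer).
        var process = new ActionBlock<object>(
            async @event =>
            {
                await projection.HandleAsync(@event);
                var processed = Interlocked.Increment(ref _processed);

                // Commit after 1000 processed events, or when the buffer ran dry.
                if (processed >= 1000 || processed == Volatile.Read(ref _pending))
                {
                    await projection.CommitUnitOfWorkAsync();
                    Interlocked.Add(ref _pending, -processed);
                    Interlocked.Add(ref _processed, -processed);
                }
            },
            new ExecutionDataflowBlockOptions { BoundedCapacity = 200 });

        deserialise.LinkTo(process, new DataflowLinkOptions { PropagateCompletion = true });
        Input = deserialise;
    }

    // Raw event JSON from the Event Store subscription is posted here via SendAsync.
    public ITargetBlock<string> Input { get; }
}

Note that the TransformBlock preserves the input order even when deserialising on multiple threads, so the projection still processes events in sequence.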

The values of a batch size of 1000 events and queue size of 200 are the result of experimentation. This also shows further options for improvement by tweaking these values for each projection independently. A projection that adds a new row for every event slows down considerably when using a batch size of 10,000, while other projections that merely update a few entities benefit from a larger batch size.

The deserialisation queue size is also vital for good performance.

So, TL;DR:

Entity Framework is fast enough to handle up to 10,000 modifications per second, per parallel thread. Utilise your unit of work and avoid committing every single change, especially in CQRS, where the projection is the only thread making any changes to the data. Properly interleave parallel tasks; don't just blindly async everything.

Upvotes: 5

Yves Reynhout

Reputation: 2990

As the author of Projac, I suggest you have a look at what it has to offer, and steal what feels appropriate. I built it specifically because LINQ/EF are poor choices on the read model/projection side ...

Upvotes: 3

PhillipH

Reputation: 6222

Saving one record at a time to SQL Server is always going to perform poorly. You have two options:

  1. Table-valued parameters

Use a table-valued parameter to pass multiple records to a stored procedure in a single call.

  2. ADO.NET bulk copy

Use SqlBulkCopy (the ADO.NET bulk insert API) to bulk copy the data in. (Both options are sketched after this list.)
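Hedged sketches of both, assuming the rows have already been shaped into a DataTable and that a dbo.BookingListType user-defined table type and a dbo.SaveBookingLists stored procedure exist (none of these names come from the answer):

// Requires System.Data and System.Data.SqlClient (classic ADO.NET).
public static async Task SaveViaTableValuedParameterAsync(string connectionString, DataTable bookingListTable)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("dbo.SaveBookingLists", connection)) // hypothetical procedure
    {
        command.CommandType = CommandType.StoredProcedure;

        var parameter = command.Parameters.AddWithValue("@BookingLists", bookingListTable);
        parameter.SqlDbType = SqlDbType.Structured;
        parameter.TypeName = "dbo.BookingListType"; // hypothetical user-defined table type

        await connection.OpenAsync();
        await command.ExecuteNonQueryAsync();
    }
}

public static async Task SaveViaBulkCopyAsync(string connectionString, DataTable bookingListTable)
{
    using (var bulkCopy = new SqlBulkCopy(connectionString))
    {
        bulkCopy.DestinationTableName = "dbo.BookingList"; // hypothetical target table
        bulkCopy.BatchSize = 1000;
        await bulkCopy.WriteToServerAsync(bookingListTable);
    }
}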

Neither of these will benefit from EF, apart from connection handling.

I would do neither if your data is simple key-value pairs; an RDBMS is probably not a good fit. MongoDB, RavenDB or another flat data store would probably perform better.

Upvotes: 2
