Reputation: 541
I’ve developed an entirely in-memory application using .NET. Data is loaded from persistent storage once at start-up, and from then on only changes to objects are trickled back to persistent storage from the application tier through a work queue. It’s working great and benchmarking even better (100k API transactions per second). I realise this is an unconventional and difficult-to-scale architecture. It was something of an experiment =)
(Current architecture on the left, desired architecture on the right)
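To make the write path concrete, here is a simplified sketch of the write-behind queue I described. The `IPersistentStore` interface and the single background writer are stand-ins for whatever storage layer and concurrency you actually run:

```csharp
// Sketch of the write-behind pattern: mutate the in-memory object first,
// then queue the change so a background consumer trickles it to storage.
using System.Collections.Concurrent;
using System.Threading.Tasks;

public interface IPersistentStore
{
    Task SaveAsync(object entity);   // hypothetical persistence call
}

public class WriteBehindQueue
{
    private readonly BlockingCollection<object> _pending = new();
    private readonly IPersistentStore _store;

    public WriteBehindQueue(IPersistentStore store)
    {
        _store = store;
        Task.Run(DrainAsync);        // single fire-and-forget background writer
    }

    // Called from the application tier after mutating an in-memory object.
    public void Enqueue(object changedEntity) => _pending.Add(changedEntity);

    private async Task DrainAsync()
    {
        // Blocks until items arrive; eventual consistency is acceptable here.
        foreach (var entity in _pending.GetConsumingEnumerable())
            await _store.SaveAsync(entity);
    }
}
```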
Now I’m starting to think about redundancy. I’d like to run two application servers side by side and load balance between them. This would mean keeping all in-memory objects synchronised, probably through persistent TCP connections shuttling binary-serialized objects back and forth. Eventual consistency is OK. Conceptually I can see how this would work once both app servers have been cold-started to the same state from persistent storage, but I am having difficulty conceptualising how I would instantiate and synchronise a new application server node while requests are streaming in. I guess this sounds like a snapshot + transaction log kind of thing?
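Here is roughly how I imagine that bootstrap would go, as a sketch only. The `IPeerNode` and `InMemoryState` types are hypothetical; the idea is that the new node loads a snapshot cut at sequence N from a live peer, then replays every logged change after N until it has caught up:

```csharp
using System.Collections.Generic;

// One logged mutation, tagged with a monotonically increasing sequence number.
public record LogEntry(long Sequence, byte[] SerializedChange);

public interface IPeerNode
{
    // Consistent point-in-time image of the peer's memory, plus its sequence.
    (byte[] SnapshotBytes, long Sequence) TakeSnapshot();

    // All changes applied after the given sequence, including the live tail.
    IEnumerable<LogEntry> ReadLogSince(long sequence);
}

public class NodeBootstrapper
{
    public void WarmStart(IPeerNode peer, InMemoryState state)
    {
        // 1. Copy the peer's state as of sequence number `seq`.
        var (snapshot, seq) = peer.TakeSnapshot();
        state.LoadFrom(snapshot);

        // 2. Replay everything the peer applied after the snapshot was cut.
        //    Requests keep streaming into the peer meanwhile; they simply
        //    extend the log, so the new node converges (eventual consistency).
        foreach (var entry in peer.ReadLogSince(seq))
            state.Apply(entry.SerializedChange);

        // 3. Once replay lag is near zero, register with the load balancer.
    }
}

// Hypothetical in-memory state container.
public class InMemoryState
{
    public void LoadFrom(byte[] snapshot) { /* deserialize full object graph */ }
    public void Apply(byte[] change) { /* deserialize and apply one mutation */ }
}
```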
Does this sound achievable? Is this kind of architecture in use anywhere of note?
Upvotes: 1
Views: 146
Reputation: 6359
There are products out there which meet your needs. You haven't mentioned your NFRs (non-functional requirements), so I don't know if you want to hand-roll this just for efficiency. Something like Infinispan should work for you. What will be more challenging is transactionality of the data, given you're in in-memory space. It would also be worth exploring how technical and destructive testing could be achieved.
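As a minimal illustration of the transactionality problem (all types here are hypothetical): two replicas applying updates concurrently can silently lose writes unless each change carries the version it was based on, for example via optimistic versioning:

```csharp
using System.Collections.Concurrent;

// Value plus the version it was read at; records give us value equality.
public record Versioned<T>(T Value, long Version);

public class ReplicatedMap<TKey, TValue> where TKey : notnull
{
    private readonly ConcurrentDictionary<TKey, Versioned<TValue>> _map = new();

    // Rejects the write if another replica has updated the entry since the
    // caller read it; the caller must then re-read and retry (or merge).
    public bool TryUpdate(TKey key, TValue newValue, long expectedVersion)
    {
        return _map.TryGetValue(key, out var current)
            && current.Version == expectedVersion
            && _map.TryUpdate(key,
                   new Versioned<TValue>(newValue, expectedVersion + 1),
                   current);
    }
}
```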
Upvotes: 1