Reputation: 305
I have a question concerning the scalability within a microservice architecture:
Independent of the inter-service communication style (HTTP/REST or message-based), if a service scales, meaning several replicas of the service are launched, how is shared main memory realized? To be more precise, how can instance1 access the memory of instance2?
I am asking this question because a shared database (as opposed to an in-memory store) between all instances of a service can be far too slow for reads and writes.
Could an expert in designing scalable system architectures explain what exactly the difference is between using the (open source) Redis solution and the (open source) Hazelcast solution for this problem?
And as another possible solution: designing scalable systems with RabbitMQ:
Is it feasible to use message queues as a shared-memory solution, by sending medium/large objects within messages to a worker queue?
Thanks for your help.
Upvotes: 0
Views: 597
Reputation: 129065
several instances of the service are going to be launched, how is a shared main memory realized? To be more precise, how can instance1 access the memory of instance2?
You don't. A stateless workload scales by adding more replicas. It is important that those replicas are in fact stateless and loosely coupled: shared-nothing. All replicas can still communicate with an in-memory service or a database, but that stateful service is its own independent service (in a microservice architecture).
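To illustrate the separation: the replicas hold no shared state of their own; they all delegate to one external stateful service. A minimal sketch in Python, where a dict-backed class stands in for that external store (in production this would be a network call to something like Redis; all names here are illustrative):

```python
class SharedStore:
    """Stand-in for an external stateful service (e.g. Redis).
    In production each replica would reach it over the network."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value


class ServiceReplica:
    """A stateless replica: it keeps no session state between
    requests; all shared state lives in the external store."""
    def __init__(self, store):
        self.store = store

    def handle_request(self, user_id, payload):
        # Read whatever a previous request (possibly handled by a
        # *different* replica) left behind, append, and write back.
        history = self.store.get(user_id) or []
        history = history + [payload]
        self.store.set(user_id, history)
        return history


store = SharedStore()
replica1 = ServiceReplica(store)
replica2 = ServiceReplica(store)

replica1.handle_request("user-42", "first")
result = replica2.handle_request("user-42", "second")
print(result)  # replica2 sees the state written by replica1
```

Because the replicas themselves are interchangeable, a load balancer can route any request to any of them, and scaling out is just a matter of starting more copies.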
what exactly is the difference in using the (open source) Redis solution or using the (open source) Hazelcast solution to this problem?
Both are valid solutions. Which is best for you depends on which libraries, protocols, or integration patterns fit your stack best.
Is it feasible to use message queues as a shared memory solution, by sending large/medium size objects within messages to a worker queue?
Yes, that is perfectly fine. Alternatively, you can use a distributed pub-sub messaging platform like Apache Kafka or Apache Pulsar.
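The worker-queue idea can be sketched like this: a producer serializes a medium-sized object into a message, and any worker replica can pull and deserialize it. Here Python's standard-library `queue.Queue` stands in for a RabbitMQ queue (in production you would publish through a client such as pika; this is only a shape sketch):

```python
import json
import queue

# Stand-in for a RabbitMQ worker queue. With a real broker, put/get
# would be publish/consume calls over a channel.
work_queue = queue.Queue()

def publish(q, obj):
    """Producer side: serialize the object into a message body."""
    q.put(json.dumps(obj).encode("utf-8"))

def consume(q):
    """Worker side: pull one message and deserialize the payload."""
    return json.loads(q.get().decode("utf-8"))

publish(work_queue, {"task": "resize", "image_id": 7})
job = consume(work_queue)
print(job["task"])  # → resize
```

One caveat worth noting: a queue gives you hand-off of state (each message is consumed by one worker), not random access to shared state the way a key-value store like Redis does, so it suits work distribution better than it suits a shared cache.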
Upvotes: 1