ankurlu

Reputation: 171

Advice on Solution using HazelCast Cache

Need some advice on Application architecture front.

I am working on an application that uses Hazelcast as its caching solution. The technology stack also includes Hibernate and Spring. My requirement is that end users should be able to get data from the cache. As part of the data-loading process, all tables are dumped one by one into Hazelcast, each into its own cache.

The data needed by end users is a processed form of this data, i.e. the result of joining tables on their foreign-key columns. This join is done by querying the caches using Predicates; after manipulating the query results, the combined data is presented to the user. I need some advice on how to set up my application architecture so that it is scalable. The approaches I can think of are listed below, after a short sketch of the join for context.
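Roughly what the predicate-based join looks like, assuming the Hazelcast 3.x IMap/Predicates API; the CUSTOMER and ORDER maps and their fields are simplified placeholders, not my real schema:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import com.hazelcast.query.Predicates;

    import java.io.Serializable;
    import java.util.Collection;

    public class OrderCustomerJoin {

        // Placeholder domain classes; anything stored in an IMap must be serializable.
        public static class Customer implements Serializable {
            public long id;
            public String name;
            public String region;
        }

        public static class Order implements Serializable {
            public long id;
            public long customerId; // foreign key into the CUSTOMER cache
        }

        // Filters one cache with a Predicate, then "joins" by looking up the
        // second cache using the foreign-key value.
        public static void printOrdersForRegion(HazelcastInstance hz, String region) {
            IMap<Long, Customer> customers = hz.getMap("CUSTOMER");
            IMap<Long, Order> orders = hz.getMap("ORDER");

            Collection<Customer> matched =
                    customers.values(Predicates.equal("region", region));

            for (Customer c : matched) {
                Collection<Order> theirOrders =
                        orders.values(Predicates.equal("customerId", c.id));
                System.out.println(c.name + " -> " + theirOrders.size() + " orders");
            }
        }
    }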

a) Create a Hazelcast cluster instance (Hazelcast.newHazelcastInstance()) and let it run on a JVM. Let the end users connect to that instance using a Hazelcast client (HazelcastClient.newHazelcastClient(clientConfig)) running on their client machines, get the data from the cache, manipulate it on the client and use it. At one time, close to 150 clients can be querying the cache, which means I need close to 150 HazelcastClient connections. This seems to be a bottleneck. Please suggest whether making such a large number of connections is problematic or not. (A minimal sketch of one such client follows.)
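A minimal sketch of one end-user client in this approach, assuming the Hazelcast 3.x client API; the cluster addresses and map name are placeholders:

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    public class CacheClient {

        public static void main(String[] args) {
            // Point the client at the cluster members (placeholder addresses).
            ClientConfig clientConfig = new ClientConfig();
            clientConfig.getNetworkConfig().addAddress("10.0.0.1:5701", "10.0.0.2:5701");

            // In approach (a) every end-user machine creates its own client,
            // so ~150 of these would be connected to the cluster at peak.
            HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

            IMap<Long, Object> customers = client.getMap("CUSTOMER");
            System.out.println("CUSTOMER entries visible to this client: " + customers.size());

            client.shutdown();
        }
    }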

b) Create a Hazelcast cluster instance (Hazelcast.newHazelcastInstance()) and let it run on a JVM. Create a REST server running on a separate JVM that holds a HazelcastClient object, and let all end users hit the REST server and get the response as JSON. I can manipulate the data as part of the REST call and return the manipulated data to the end users. Since the REST server has a HazelcastClient connection, it can connect to the Hazelcast cluster running on a different JVM and query it. I can scale this by running "n" REST servers behind a load balancer, where each REST server holds a connection to the cluster. The issue is that this architecture has two hops: one from the end user to the REST server, and another from the REST server (holding the HazelcastClient connection) to the Hazelcast cluster. Also, if multiple requests hit the REST server, can multiple requests be served by a single client connection? (A sketch of this setup follows.)
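A rough sketch of the REST-server side of this approach, assuming Spring MVC and the Hazelcast 3.x client API; the CUSTOMER map, the endpoint path and the cluster address are placeholders. Here one HazelcastClient bean is shared by all controllers (whether that single client can safely serve many concurrent requests is exactly the part I am unsure about):

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @Configuration
    class HazelcastClientConfiguration {

        // A single client instance shared by the whole REST server.
        @Bean(destroyMethod = "shutdown")
        public HazelcastInstance hazelcastClient() {
            ClientConfig config = new ClientConfig();
            config.getNetworkConfig().addAddress("10.0.0.1:5701"); // placeholder address
            return HazelcastClient.newHazelcastClient(config);
        }
    }

    @RestController
    class CustomerController {

        private final HazelcastInstance hazelcastClient;

        @Autowired
        CustomerController(HazelcastInstance hazelcastClient) {
            this.hazelcastClient = hazelcastClient;
        }

        // Reads from the CUSTOMER cache and returns the entry, serialized as JSON.
        @RequestMapping("/customers/{id}")
        public Object getCustomer(@PathVariable("id") long id) {
            IMap<Long, Object> customers = hazelcastClient.getMap("CUSTOMER");
            return customers.get(id);
        }
    }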

Is there any other approach that I can follow?

Upvotes: 3

Views: 648

Answers (2)

Jagan Sivanesan

Reputation: 59

To add on: whether the objective is to make sure that only one Hazelcast instance is running across all the servers, or to run one instance per JVM, both are fine.

For example, if there are 3 app servers in your cluster, creating an instance per JVM and letting the data be replicated across them sounds better. Within each application server, all the applications (WAR/EAR) can then use the data from their own instance.

Upvotes: 0

noctarius

Reputation: 6094

The recommended setup for Hazelcast is normally a separate cluster plus clients that connect to this cluster. That way you can scale the REST API servers and the caches independently.

There is not even a need to have a Hazelcast node running locally to connect to, since no single node holds all the data. That means there is a high chance you get a network round trip anyway.

Depending on the frequency of changes in your cached data (at the moment it sounds quite stable), you might want to activate a Near Cache in the client to keep the most recently or frequently used elements locally. Those entries are automatically invalidated when the data changes; however, it adds a small chance of reading stale data for a short amount of time. A rough sketch of such a client-side configuration is below.
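Something along these lines would enable it, assuming the Hazelcast 3.x client API; the map name, cluster address and TTL are placeholders to adapt to your setup:

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.config.NearCacheConfig;
    import com.hazelcast.core.HazelcastInstance;

    public class NearCacheClientExample {

        public static void main(String[] args) {
            ClientConfig clientConfig = new ClientConfig();
            clientConfig.getNetworkConfig().addAddress("10.0.0.1:5701"); // placeholder

            // Keep recently used CUSTOMER entries in the client's local memory.
            NearCacheConfig nearCache = new NearCacheConfig("CUSTOMER");
            nearCache.setInvalidateOnChange(true); // drop local copies when the cluster data changes
            nearCache.setTimeToLiveSeconds(300);   // optional safety net against stale reads
            clientConfig.addNearCacheConfig(nearCache);

            HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

            // Repeated client.getMap("CUSTOMER").get(key) calls for the same key are
            // served from the local near cache until the entry is invalidated or expires.
        }
    }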

I hope this helps.

Upvotes: 1
