Dineshkumar

Reputation: 294

Sharing a Java object across a cluster

My requirement is to share a Java object across a cluster.

I get confused

with the constraint that …

Can anyone come up with the best option with these constraints in mind?

Upvotes: 6

Views: 4824

Answers (3)

cpurdy

Reputation: 1236

It's not open source, but Oracle Coherence would easily solve this problem.

If you need an implementation of JCache, the only one I'm aware of that is available today is Oracle Coherence; see: http://docs.oracle.com/middleware/1213/coherence/develop-applications/jcache_part.htm
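As a rough illustration of what the standard JCache (JSR-107) API looks like on top of any compliant provider, here is a minimal sketch; the cache name "shared" and the String key/value types are just placeholders, not anything Coherence-specific:

    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;

    public class JCacheExample {
        public static void main(String[] args) {
            // Uses whichever JCache (JSR-107) provider is on the classpath.
            CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();

            MutableConfiguration<String, String> config =
                    new MutableConfiguration<String, String>()
                            .setTypes(String.class, String.class);

            // "shared" is just an illustrative cache name.
            Cache<String, String> cache = cacheManager.createCache("shared", config);
            cache.put("greeting", "hello");
            System.out.println(cache.get("greeting"));
        }
    }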

For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.

Upvotes: 1

vcetinick

Reputation: 2017

It depends on the use case of the objects you want to share in the cluster.

I think it really comes down to the following options, ordered from most complex to least complex:

Distributed caching (http://www.ehcache.org)

Distributed caching is good if you need to ensure that an object is accessible from a cache on every node. I have used Ehcache for distribution quite successfully; there is no need to set up a Terracotta server unless you need the scale, as instances can simply be pointed at each other via RMI. It works synchronously or asynchronously depending on requirements. Cache replication is also handy if nodes go down, so the cache is redundant and you don't lose anything. This is good if you need to make sure that the object has been updated across all the nodes.
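As a minimal sketch of what this looks like with the classic Ehcache 2.x API (the cache name "sharedObjects" is illustrative, and the RMI replication settings for it would live in ehcache.xml rather than in the code):

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class ReplicatedCacheExample {
        public static void main(String[] args) {
            // Loads ehcache.xml from the classpath; the RMI replication config
            // for the "sharedObjects" cache (illustrative name) lives there.
            CacheManager cacheManager = CacheManager.create();
            Cache cache = cacheManager.getCache("sharedObjects");

            // Putting the object here propagates it to the peer caches
            // (synchronously or asynchronously, depending on the replicator config).
            cache.put(new Element("appConfig", new java.util.HashMap<String, String>()));

            // Any node can then read its local replica.
            Element element = cache.get("appConfig");
            if (element != null) {
                System.out.println("Shared object: " + element.getObjectValue());
            }

            cacheManager.shutdown();
        }
    }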

Clustered execution / data distribution (http://www.hazelcast.com/)

Hazelcast is also a nice option, as it provides a way of executing Java classes across a cluster. This is more useful if you have an object that represents a unit of work that needs to be performed and you don't care so much where it gets executed.

It is also useful for distributed collections, e.g. a distributed map or queue.
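For example, a minimal sketch of a Hazelcast distributed map, assuming the Hazelcast 3.x package layout and an illustrative map name:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    public class HazelcastSharedMapExample {
        public static void main(String[] args) {
            // Starting an instance on each node forms/joins the cluster
            // (multicast discovery by default).
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // A distributed map: entries are partitioned and backed up across
            // the cluster, so every node sees the same object under the same key.
            IMap<String, String> shared = hz.getMap("shared-objects"); // illustrative name
            shared.put("greeting", "hello from " + hz.getCluster().getLocalMember());

            System.out.println(shared.get("greeting"));
        }
    }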

Roll your own (RMI/JGroups)

You can write your own client/server, but I think you will start to run into the issues that the bigger frameworks solve once the requirements of the objects you're dealing with start to get complex. Realistically, Hazelcast is simple enough that it should eliminate the need to roll your own.
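For comparison, a bare-bones roll-your-own sketch using plain Java RMI; the SharedCounter interface and the registry binding name are purely illustrative:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // The remote interface that every node codes against.
    interface SharedCounter extends Remote {
        int increment() throws RemoteException;
    }

    // Lives on the node that owns the real object; other nodes call it through stubs.
    class SharedCounterImpl implements SharedCounter {
        private int count;
        public synchronized int increment() {
            return ++count;
        }
    }

    public class RmiServerNode {
        public static void main(String[] args) throws Exception {
            SharedCounter stub =
                    (SharedCounter) UnicastRemoteObject.exportObject(new SharedCounterImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("sharedCounter", stub); // illustrative binding name
            System.out.println("Shared object exported; other nodes can look it up.");
        }
    }

Other nodes would obtain a stub with LocateRegistry.getRegistry(host).lookup("sharedCounter") and call it as if it were local; everything beyond that (discovery, failover, replication) is exactly what the frameworks above already handle for you.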

Upvotes: 4

ozma

Reputation: 1803

  • This is just an idea; you might want to check the exact implementation.
  • It will degrade performance, but I don't see how that could be avoided.
  • It is not an easy one to implement; maybe you should consider load balancing instead of clustering.

You might consider RMI and/or a dynamic proxy.

  • Extract an interface from your objects.
  • Use RMI to access the real object (from all cluster nodes, even the one that actually holds the object).
  • To add RMI to existing code, you might use a dynamic proxy (again, not sure about the implementation).

*A dynamic proxy can wrap any object and perform pre- and post-tasks on each method invocation; in this case it could delegate to the original object via an RMI invocation (see the sketch after this list).

  • You will need connectivity between the cluster nodes in order to propagate the RMI object.
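A minimal sketch of the dynamic-proxy part, using java.lang.reflect.Proxy; here it simply delegates to the local target with pre/post logging, whereas the idea above would forward the call over RMI instead (the class and method names are illustrative):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    public final class DelegatingProxy {
        // Wraps a target behind one of its interfaces and runs pre/post logic
        // around every call; a real version would forward the call over RMI
        // instead of invoking the local target.
        @SuppressWarnings("unchecked")
        public static <T> T wrap(final T target, Class<T> iface) {
            InvocationHandler handler = (proxy, method, methodArgs) -> {
                System.out.println("before " + method.getName()); // pre task
                Object result = method.invoke(target, methodArgs); // delegate (locally here)
                System.out.println("after " + method.getName());   // post task
                return result;
            };
            return (T) Proxy.newProxyInstance(
                    iface.getClassLoader(), new Class<?>[] { iface }, handler);
        }

        public static void main(String[] args) {
            Runnable proxied = wrap((Runnable) () -> System.out.println("doing work"), Runnable.class);
            proxied.run(); // prints the "before"/"after" lines around the real call
        }
    }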

Upvotes: 0
