Abo_3abd

Reputation: 13

Java RMI client loses connection

I am building a distributed application with Java RMI.

Whenever an agent registers with the server, the server does some calculation (e.g., adds the agent to an array), and the agents keep sending liveness messages to the server.

What I want is that whenever an agent stops sending this message or the connection is lost, the server notices that and recalculates (e.g., deletes the client from the list).

What is the best way to do this?

I have seen some solutions that ping the clients or use multiple threads. Any advice is appreciated.

Upvotes: 1

Views: 2500

Answers (2)

user207421

Reputation: 310957

Have your server issue each client a unique remote object, say a remote session, that implements Unreferenced, and put the logic you require in the unreferenced() method implementation.

NB There isn't really such a thing as 'losing a connection' in RMI. Its underlying connections come and go all the time, subject to connection pooling, and at the API level there is no connection at all.
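A minimal sketch of this idea, with hypothetical names (AgentSession, AgentSessionImpl, the agentId parameter): the server exports one session object per registered agent, and the RMI distributed garbage collector calls unreferenced() once the client JVM no longer holds a live reference (it exited, crashed, or its lease expired).

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.Unreferenced;
import java.util.Set;

public class UnreferencedDemo {

    // Hypothetical per-client session interface.
    public interface AgentSession extends Remote {
        void keepAlive() throws RemoteException;
    }

    // One instance per registered agent. The server would export each
    // instance with UnicastRemoteObject.exportObject(session, 0) and hand
    // the stub back to the client at registration time.
    public static class AgentSessionImpl implements AgentSession, Unreferenced {
        private final String agentId;
        private final Set<String> registry; // shared server-side agent list

        public AgentSessionImpl(String agentId, Set<String> registry) {
            this.agentId = agentId;
            this.registry = registry;
            registry.add(agentId);
        }

        @Override
        public void keepAlive() {
            // Liveness no-op; any remote call also refreshes the DGC lease.
        }

        @Override
        public void unreferenced() {
            // The RMI runtime calls this when no client holds a live
            // reference any more; remove the agent and recalculate here.
            // Note: this fires only after the DGC lease expires, which
            // defaults to 10 minutes (java.rmi.dgc.leaseValue).
            registry.remove(agentId);
        }
    }
}
```

The detection latency is bounded by the DGC lease, so this is simple but not instantaneous.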

Upvotes: 1

CodeBlind

Reputation: 4569

There are a couple of ways you could do this, each with associated benefits and drawbacks.

1) Bidirectional Lifeline

In this solution, your server and client both have lifeline() methods that effectively block forever. If the connection goes down, a RemoteException is thrown immediately from this method call. If the client initiated the call, it can either attempt to reconnect to the server or exit. The server has to maintain some logic so that a server.lifeline() call made by a client returns when that client disconnects (detected when the server's own call to client.lifeline() throws an exception); otherwise you'll accumulate dangling threads on the server that block forever, consuming resources until the server runs out of memory.

Benefits: Immediate notification when connections die.

Drawbacks: A new thread on server and client must be maintained for every connection, more complicated disconnect logic.
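A sketch of the server-side blocking mechanism described above, with hypothetical names (Lifeline, LifelineImpl, release()): lifeline() parks the calling thread until release() is invoked, which the server would do once its reverse client.lifeline() call throws and the client is known to be gone.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

public class LifelineDemo {

    // Hypothetical remote interface; both server and client expose it.
    public interface Lifeline extends Remote {
        void lifeline() throws RemoteException; // blocks until disconnect
    }

    public static class LifelineImpl implements Lifeline {
        private final Object lock = new Object();
        private boolean released = false;

        @Override
        public void lifeline() {
            // Park the caller's dispatch thread until release() fires.
            synchronized (lock) {
                while (!released) {
                    try {
                        lock.wait();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }

        // Called by the server's own monitoring logic when the reverse
        // client.lifeline() call throws, so the blocked call can return
        // instead of leaking a thread.
        public void release() {
            synchronized (lock) {
                released = true;
                lock.notifyAll();
            }
        }
    }
}
```

This is the "disconnect logic" the drawback refers to: without release(), every dead client leaves one blocked thread behind.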

2) Heartbeat

In this case, you still have one thread on each client calling server.ping() every few seconds, but you only need one thread on the server, calling client.ping() on each connected client every few seconds. If any ping() call results in an exception, you know that you are disconnected.

Benefits: Simpler connection logic, fewer resources consumed on the server due to fewer threads managing connection state.

Drawbacks: Slower disconnect response times (up to delay between ping calls), susceptible to long TCP timeouts during failed ping() calls.
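The server side of the heartbeat can be a single periodic sweep over the registered clients, with hypothetical names (Pingable, HeartbeatMonitor, sweep()): any client whose ping() throws is dropped, which is where the server's "recalculate" step would go.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HeartbeatDemo {

    // Hypothetical remote interface exposed by each client.
    public interface Pingable extends Remote {
        void ping() throws RemoteException;
    }

    public static class HeartbeatMonitor {
        private final Map<String, Pingable> clients = new ConcurrentHashMap<>();

        public void register(String id, Pingable client) {
            clients.put(id, client);
        }

        // One pass over all clients; a single scheduled thread (e.g. a
        // ScheduledExecutorService) would call this every few seconds.
        public void sweep() {
            for (Map.Entry<String, Pingable> e : clients.entrySet()) {
                try {
                    e.getValue().ping();
                } catch (RemoteException ex) {
                    // Ping failed: treat the client as disconnected and
                    // recalculate (here, just drop it from the registry).
                    clients.remove(e.getKey());
                }
            }
        }

        public int liveCount() {
            return clients.size();
        }
    }
}
```

Note that a slow or hung client can stall the whole sweep for a full TCP timeout, which is the drawback mentioned above; a bounded per-call timeout or one sweep task per client mitigates that.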

So what's the best route?

I think this depends totally on what your app is trying to do. If you need the absolute most up-to-date information on the state of your network as soon as possible, then the first approach will give you that. If you can tolerate a delay in detecting disconnects, the second approach is better.

A third option could be to look into RMI's use of RMIClientSocketFactory/RMIServerSocketFactory implementations and try to link remote objects to the specific sockets RMI creates. I've done this before and it's not a quick solution, but it can give you the best of both worlds with the least overhead.

Upvotes: 2
