Reputation: 8115
I'm creating an XML-RPC server in Java using the Redstone XML-RPC library. The server is responsible for kicking off commands on the system and sending back the response. For example, the user tells the server to run ls and the server returns a listing of files. Similarly, the user can issue asynchronous commands and get back a process ID to query later. All of these methods live in one big handler class, Command, which is part of the main class RPCServlet (which extends XmlRpcServlet).
To accomplish this, I keep a global HashMap called instances. When an async command comes in, I create an instance of class ProcessManager, which extends Thread. I add it to the instances map, start it, and return its ID to the user. When a query for that ID comes in, I look it up in the global map and return either its full or partial status (depending on whether or not it has completed).

This works fine when it's just one request after another, but when you really begin to pound it with lots of requests that operate on the global instances map, I start getting concurrency exceptions.
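For context, ProcessManager is shaped roughly like this (a heavily simplified sketch: the real class actually launches the external process and records its output in procResult, our own result-holder class, which I've left out here, and the ID generation shown is only illustrative):

import java.util.concurrent.atomic.AtomicInteger;
import redstone.xmlrpc.XmlRpcStruct;

// Rough shape of ProcessManager; details trimmed for this question.
public class ProcessManager extends Thread {
    private static final AtomicInteger NEXT_ID = new AtomicInteger(1);

    public final int objectId = NEXT_ID.getAndIncrement(); // ID handed back to the caller
    private final XmlRpcStruct struct;

    public ProcessManager(XmlRpcStruct struct) {
        this.struct = struct;
    }

    // Validates the incoming struct; returns false if it can't be used.
    public boolean readInHashValues() {
        return struct.containsKey("command");
    }

    @Override
    public void run() {
        // Run the external command and capture its output so that later
        // status queries can return a full or partial result.
    }
}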
Now here are my points of confusion:
When a request comes in, does a new object of class Command get created, or is there one object sitting there forever taking in commands? Following on from that, how does each request see the global instances map? This is my init code:
public class RPCServlet extends XmlRpcServlet {

    // The global map of running ProcessManager instances, shared by every request.
    private final HashMap instances = new HashMap();

    @Override
    public void init(ServletConfig servletConfig) throws ServletException {
        super.init(servletConfig);
        getXmlRpcServer().addInvocationHandler("Command", new Command(instances));
    }
}
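For completeness, Command simply keeps a reference to the map it is handed (a trimmed-down sketch; the real class contains all of the command methods):

import java.util.Map;

// Trimmed-down sketch of the invocation handler that init() registers.
public class Command {
    private final Map instances;

    public Command(Map instances) {
        this.instances = instances;
    }

    // asyncCmd(), killAll(), the status query, etc. live here and all
    // read from or write to `instances`.
}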
When I attempt to solve the concurrency exceptions, I either mark the map accesses as synchronized or use a ConcurrentHashMap rather than a HashMap. Either way, I run into deadlocks. Why might this be? And where should the synchronization go: on RPCServlet? On its Command handler class? Here's where I run into these concurrency exceptions.
This is what asyncCmd looks like:
public HashMap asyncCmd(XmlRpcStruct struct) throws XmlRpcFault {
    HashMap hash;
    if (struct.containsKey("command")) {
        ProcessManager proc = new ProcessManager(struct);
        boolean validInput = proc.readInHashValues();
        if (!validInput) {
            throw new XmlRpcFault(400, proc.procResult.getTopError());
        }
        int id = proc.objectId;
        instances.put(id, proc);
        proc.start();
        String a;
        if ((a = proc.procResult.getTopError()) != null) {
            throw new XmlRpcFault(500, a);
        }
        hash = new HashMap();
        hash.put("id", id);
        return hash;
    } else {
        throw new XmlRpcFault(400, "Unable to find a command");
    }
}
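The status query mentioned earlier boils down to a lookup in the same map; simplified, and with a placeholder method name, it is essentially this:

// Simplified status query: look the process up by ID and report whether it
// has finished. The real method also returns the partial or full output.
public HashMap queryCmd(int id) throws XmlRpcFault {
    ProcessManager proc = (ProcessManager) instances.get(id);
    if (proc == null) {
        throw new XmlRpcFault(404, "No such process: " + id);
    }
    HashMap hash = new HashMap();
    hash.put("id", id);
    hash.put("finished", !proc.isAlive()); // Thread.isAlive() is false once run() returns
    return hash;
}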
And killAll:
public HashMap killAll() throws XmlRpcFault {
    HashMap retHash = new HashMap();
    int length = instances.values().size();
    if (length > 0) {
        Set keys = instances.keySet();
        for (Object key : keys) {
            retHash.put(key, killAllKiller(Integer.parseInt(key.toString())));
        }
        // Since we killed all procs, suggest a GC as well
        instances.clear();
        System.gc();
        return retHash;
    } else {
        throw new XmlRpcFault(404, "No processes running");
    }
}
Upvotes: 0
Views: 306
Reputation: 30448
Let me suggest another alternative. In this way, there will not be contention between new requests and results.

I would put this map in the servlet context, which is the natural place to keep application-level variables.
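A minimal sketch of that last part, assuming a ConcurrentHashMap (which the question already experimented with) created once in init() and published through the ServletContext:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import redstone.xmlrpc.XmlRpcServlet;

public class RPCServlet extends XmlRpcServlet {

    @Override
    public void init(ServletConfig servletConfig) throws ServletException {
        super.init(servletConfig);

        // Build the shared map once, make it thread-safe, and publish it as an
        // application-scoped attribute rather than keeping it in a servlet field.
        ConcurrentMap<Integer, ProcessManager> instances =
                new ConcurrentHashMap<Integer, ProcessManager>();
        servletConfig.getServletContext().setAttribute("instances", instances);

        // Hand the same shared map to the invocation handler, exactly as before.
        getXmlRpcServer().addInvocationHandler("Command", new Command(instances));
    }
}

A useful side effect is that ConcurrentHashMap iterators are weakly consistent, so killAll can walk keySet() while other requests are inserting entries without a ConcurrentModificationException, and anything else in the application can reach the map via getServletContext().getAttribute("instances").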
Upvotes: 1