Reputation: 36726
I have a web crawler where the basic layout is a manager that runs agents which open connections, fetch the content and insert it into a database.
Each of these agents runs in a separate thread in a loop until the user sends a stop signal. The agents get their tasks from the manager agent.
The problem is that if an exception occurs in an agent, it can't be thrown up to the interface (unless I use some observer to signal that an exception occurred).
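Roughly, each agent currently looks like the sketch below; the BlockingQueue and the fetch/store methods are just placeholders standing in for my real manager and crawling code.
import java.util.concurrent.BlockingQueue;

// Simplified sketch of one agent in the current design; the manager
// hands out URLs, modelled here as a BlockingQueue for illustration.
class Agent implements Runnable {
    private final BlockingQueue<String> urls;
    private volatile boolean running = true;

    Agent(BlockingQueue<String> urls) {
        this.urls = urls;
    }

    void stop() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            try {
                String url = urls.take();      // task from the manager
                String content = fetch(url);   // open connection, fetch content
                store(content);                // insert into the database
            } catch (Exception e) {
                // The problem: the UI never sees this exception unless I
                // add some observer/callback to report it.
            }
        }
    }

    // placeholders for the real fetching and persistence code
    private String fetch(String url) throws Exception { return ""; }
    private void store(String content) throws Exception { }
}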
I'm starting to think this design is wrong, and that the correct approach is to create finite tasks and submit them to an Executor (one task per URL that opens the connection, fetches the content and inserts it into the database).
Am I right that my current design is wrong and that I should change the layout? What is the right layout to use with multithreading where different agents do different parts of the job?
Upvotes: 0
Views: 327
Reputation: 116908
Yes, I think you should be using the Executors. If you submit Callable tasks, your spider agents can throw exceptions, and calling get() on the returned Future re-throws the exception to the job submitter so it can be logged or displayed in your UI.
ExecutorService pool = Executors.newFixedThreadPool(10);
Future<Void> future = pool.submit(new Callable<Void>() {
    public Void call() throws Exception {
        // your agent code goes here
        // any exception thrown in this block will be re-thrown below
        return null;
    }
});
...
try {
    // the exception is re-thrown here for you to catch and log
    future.get();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} catch (ExecutionException e) {
    // log e.getCause() here
}
...
pool.shutdown();
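To scale this to a whole crawl, a common pattern is to submit one Callable per URL and then drain the futures, so each failure comes back tied to the URL that caused it. A rough sketch along those lines, where fetchAndStore() is a placeholder for your actual fetch-and-insert code:
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CrawlerMain {
    public static void main(String[] args) throws InterruptedException {
        List<String> urls = Arrays.asList("http://example.com/a", "http://example.com/b");

        ExecutorService pool = Executors.newFixedThreadPool(10);
        Map<String, Future<Void>> futures = new HashMap<String, Future<Void>>();

        // one finite task per URL instead of a long-running agent loop
        for (final String url : urls) {
            futures.put(url, pool.submit(new Callable<Void>() {
                public Void call() throws Exception {
                    fetchAndStore(url);   // open connection, fetch content, insert into DB
                    return null;
                }
            }));
        }
        pool.shutdown();

        // drain the futures: each failure is reported with the URL that caused it
        for (Map.Entry<String, Future<Void>> entry : futures.entrySet()) {
            try {
                entry.getValue().get();
            } catch (ExecutionException e) {
                System.err.println("Failed to crawl " + entry.getKey() + ": " + e.getCause());
            }
        }
    }

    // placeholder for the real fetching and persistence code
    private static void fetchAndStore(String url) throws Exception {
    }
}
The point is that each unit of work is finite, so its exception surfaces through its own Future instead of being swallowed inside a long-running agent loop.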
Upvotes: 4