Reputation: 4081
I have an application that periodically calls a service for data (using TorqueBox schedulers), and when a set of data is available it should process each "data record" separately.
I'd like to process those records concurrently for better performance. My first thought was to set up a JMS queue (available in TorqueBox out of the box) so that the scheduled job would put all the received data in the queue, and each record would be picked up by one of multiple connected receivers for processing.
But isn't it overengineering to put a JMS queue between elements of the same application? Are there any other approaches you could suggest here?
Upvotes: 0
Views: 816
Reputation: 71
How about using a JMS queue (as you mentioned), since HornetQ is part of JBoss/TorqueBox, and then using a message processor to handle the messages? You can also specify the level of concurrency in torquebox.rb (or .yml).
Your_Scheduled_Job -> /queues/my_queue -> TorqueBox::Messaging::MessageProcessor
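For the first hop, the scheduled job just publishes each record to the queue. A minimal sketch of the job's run method, where fetch_records is a hypothetical helper standing in for your service call:

class MyScheduledJob
  def run
    queue = TorqueBox::Messaging::Queue.new('/queues/my_queue')
    # fetch_records is a placeholder for whatever fetches data from your service
    fetch_records.each do |record|
      queue.publish(record)
    end
  end
end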
In your config/torquebox.rb file you can specify the concurrency and the name of the message processor:
queue '/queues/my_queue' do
  processor MyMessageProcessor do
    concurrency 5
  end
end
The message processor will process the messages on the queue concurrently without any further steps needed.
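The processor itself is a class extending TorqueBox::Messaging::MessageProcessor that overrides on_message; something along these lines, with the actual record handling left as a placeholder:

class MyMessageProcessor < TorqueBox::Messaging::MessageProcessor
  def on_message(body)
    # body is the deserialized record published by the scheduled job;
    # process_record is a placeholder for your own processing logic
    process_record(body)
  end
end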
I'm also still experimenting with TorqueBox and Ruby concurrency, and this is something I'm trying to implement these days...
Upvotes: 0
Reputation: 132862
A JMS queue may not be a bad solution at all; try it out and see how it works for you. When queues are as easy to use as they are in TorqueBox, using one doesn't have to be overengineering.
If you want something less involved, I recommend Java's own BlockingQueue implementations, either LinkedBlockingQueue or ArrayBlockingQueue, depending on your exact use case.
These are just regular collections, like arrays or hashes, so you'll need to create them somewhere and pass them into the components that should publish to and consume from them. They also have no concept of acknowledgement, unlike JMS queues.
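For example, since TorqueBox runs on JRuby, you can share a LinkedBlockingQueue between the scheduled job and a pool of worker threads. A rough sketch, where records and process are placeholders for your own data and logic:

require 'java'
java_import java.util.concurrent.LinkedBlockingQueue

QUEUE = LinkedBlockingQueue.new(100)  # bounded, so producers block when it's full

# producer side, e.g. inside the scheduled job:
records.each { |record| QUEUE.put(record) }  # put blocks while the queue is full

# consumer side: a handful of worker threads pulling records
workers = 5.times.map do
  Thread.new do
    loop do
      record = QUEUE.take  # take blocks until a record is available
      process(record)
    end
  end
end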
Upvotes: 1