Reputation: 711
I have a process that sends SQS messages every minute. It's important that the messages go out every minute so I'm planning on running the process on multiple instances so that it's more fault tolerant.
Even though it's running on multiple instances, I only want the SQS messages to go out once per minute. So if Machine A dispatches the messages, I don't want Machine B to send them, and vice versa.
I want to avoid having a master/slave setup.
I thought of using a separate SQS queue to coordinate: one of the processes would receive a "done" message, dispatch the messages, and then send a new "done" message when complete / after a minute. But if the "done" message doesn't get sent because of a failure or other issue, the cycle would end, and that's not acceptable.
I also thought of having the process that dispatches the messages place a timestamp in SimpleDB, or possibly another DB, and have the processes check the timestamp on an interval. The first one that checks it and finds it's older than a minute would update the timestamp and dispatch the messages.
I investigated SWF and found that it can run workers/activities on a timer, but SWF seems like overkill for this, and I'd rather avoid getting it set up and running with access to my DB.
Does anyone have an elegant solution for problems like this?
Upvotes: 0
Views: 62
Reputation: 1942
We used our MySQL DB to do this, similar to what you suggested. But we don't try to read the timestamp first (race condition). The table has a unique index on the timestamp column. The process on each instance attempts to insert the timestamp of the minute it runs, e.g. '2015-02-27 12:47:00'. If MySQL returns a duplicate key error, another instance got there first, so it does nothing. If the insert succeeded, it sends the SQS messages.
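A minimal sketch of that pattern in Python. It uses the standard library's sqlite3 in place of MySQL so it runs self-contained; the table and function names are illustrative, and with MySQL you'd catch the driver's duplicate-key error instead:

```python
import sqlite3

# In-memory DB for illustration only; in production this would be a shared
# MySQL database that all instances can reach.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dispatch_log (minute TEXT PRIMARY KEY)")

def try_acquire(minute):
    """Return True if this process won the insert race for `minute`.

    The unique index (here a PRIMARY KEY) acts as the lock: exactly one
    insert per minute value can succeed across all instances.
    """
    try:
        conn.execute("INSERT INTO dispatch_log (minute) VALUES (?)", (minute,))
        conn.commit()
        return True   # we inserted first -> safe to send the SQS messages
    except sqlite3.IntegrityError:
        return False  # duplicate key -> another instance already sent them

minute = "2015-02-27 12:47:00"
print(try_acquire(minute))  # True: first caller wins and dispatches
print(try_acquire(minute))  # False: second caller does nothing
```

Because the uniqueness check happens atomically inside the database, there is no read-then-write window for two instances to both decide to send.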
You may also want to try google for distributed cron systems.
Upvotes: 1