simonmorley

Reputation: 2804

RoR Watching for changes in a frequently updated table

I'm looking for some advice about monitoring a specific table which gets about 250,000 inserts per day. Not massive, but it's growing quickly and we can't keep handling it the way we are.

The main application relies on RoR 4 with a MySQL database. The data's streamed from a number of RADIUS servers straight into the database.

We need to monitor a specific column for a particular value and then perform a simple look up:

if column == xxx
  venue = Venue.find_by(bar: foo)
  if venue && venue.is_monitored?
    # do some other shizzle
  end
end

The location table isn't of a trivial size either, so I don't want to look through hundreds of thousands of rows each time something happens.

Can someone recommend a fast and efficient way to do this? Checking each insert from Rails seems ridiculous. We have also tried processing the job in the background every 60 seconds, but that seems antiquated.

So far, the best solution has been to watch for the value on insert, publish it to Redis, and use a Node server to listen for the event, then shove it back to Rails.
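
For context, the Redis leg of that pipeline is plain pub/sub, so the Node listener could just as easily be a Ruby process. A minimal sketch, assuming the redis gem; the channel name and payload fields are made up:

require "redis"
require "json"

# Publisher side: whatever process spots the interesting insert
# pushes a small event onto a channel.
redis = Redis.new
redis.publish("venue_events", { bar: "foo" }.to_json)

# Subscriber side (currently the node server). subscribe blocks, so
# run it in its own process; inside the Rails app the models are
# available for the lookup.
Redis.new.subscribe("venue_events") do |on|
  on.message do |_channel, payload|
    event = JSON.parse(payload)
    venue = Venue.find_by(bar: event["bar"])
    # do some other shizzle if venue && venue.is_monitored?
  end
end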

Sorry if this is subjective or off the point, I'd really like someone to teach me a tasty way to do this.

Upvotes: 1

Views: 479

Answers (1)

Benjamin Bouchet

Reputation: 13181

What about transferring the monitoring to MySQL?

You could use a TRIGGER that runs on UPDATE and INSERT of the table you are monitoring. This trigger would perform the checks, and if the conditions are met it could go one of two ways:

  • the pull way: insert the ID of the record that was updated/created into a temporary table; then, once in a while, your Rails app fetches the IDs from this temporary table, empties it, and performs the needed actions on those specific records (a migration sketch follows after this list).
  • the push way: the trigger uses sys_exec() (a user-defined function from the lib_mysqludf_sys library, not built into MySQL) to call one of your Ruby/Rails scripts, which performs the needed actions.
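
As a rough illustration of the pull way, a Rails migration could create the holding table and the trigger in one go. This is only a sketch: the monitored table (sessions), the watched column (status), the value ('xxx') and the monitored_events table are placeholders for your actual schema, and MySQL would need a second, identical AFTER UPDATE trigger if you also watch updates.

class AddMonitoredEventsTrigger < ActiveRecord::Migration
  def up
    # Disk-based table the trigger writes into and Rails later drains.
    create_table :monitored_events do |t|
      t.integer  :session_id, null: false
      t.datetime :created_at
    end

    execute <<-SQL
      CREATE TRIGGER sessions_monitor AFTER INSERT ON sessions
      FOR EACH ROW
      BEGIN
        IF NEW.status = 'xxx' THEN
          INSERT INTO monitored_events (session_id, created_at)
          VALUES (NEW.id, NOW());
        END IF;
      END
    SQL
  end

  def down
    execute "DROP TRIGGER IF EXISTS sessions_monitor"
    drop_table :monitored_events
  end
end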

Note that either way you could hit a case where the record gets modified just after the trigger fires and just before your Rails app runs, so your Rails app should also verify that all conditions are still met on the records it receives.

Edit

You have to plan carefully. Push may look attractive because it saves resources and the action is immediate, but the MySQL trigger will "shoot and forget". What if sys_exec() somehow cannot reach your Rails script (system crash, whatever)? Those records may never be seen by your Rails app.

Pull is safer, because the IDs of the concerned records are kept in the temporary table, and if something goes wrong Rails will still be able to find them later. But the action is delayed by a few seconds (which is not always acceptable) and it's more resource-hungry.

The choice is yours and depends on your requirements.

Also, if you like complicated solutions, or simply if you need data safety and an immediate response at the same time, you could implement a mix of both solutions:

  1. The trigger saves the ID of the record in the temporary table and calls your script through sys_exec().
  2. Triggered by the sys_exec() call, Rails performs the needed actions and deletes the corresponding record from the temporary table, because the action is done and the record is no longer needed.
  3. Once in a while, Rails checks the temporary table to make sure no record was missed; if there are records left, you can still perform your actions on them (a sketch of this sweep follows below).
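
A minimal sketch of that sweep, reusing the hypothetical monitored_events table from the migration above; MonitoredEvent and Session are placeholder models, and the re-check follows the advice about verifying conditions again in Rails. It could be run from cron, a rake task or a background job:

# e.g. MonitoredEventSweeper.run every minute from cron or a job queue
class MonitoredEventSweeper
  def self.run
    MonitoredEvent.find_each do |event|
      record = Session.find_by(id: event.session_id)
      next event.destroy if record.nil?

      # Re-verify: the row may have changed since the trigger fired,
      # or the push path may already have handled it.
      venue = Venue.find_by(bar: record.bar)
      if venue && venue.is_monitored? && record.status == "xxx"
        # do some other shizzle
      end

      event.destroy # drained either way so it isn't picked up twice
    end
  end
end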

This covers everything: it consumes fewer resources, responds immediately, and recovers from unpredictable situations. But it's more difficult to implement.

--

Note: I use the term "temporary table", but I don't mean "temporary" in the SQL sense (a memory table or the like); it's better to use a real disk-based table so the data survives a system crash.

Upvotes: 1
