Slim Tekaya

Reputation: 347

offline web application design recommendation

I want to know the best architecture to adopt for this case:

  1. I have many shops that connect to a web application developed using Ruby on Rails.
  2. Internet access is not available all the time.
  3. The solution was to develop an offline system, which requires installing a local copy of the remote database.

All of this is already developed. Now, here is what I want to do:

To solve this problem, I thought about using JMS-like software, possibly RabbitMQ. The idea is to push every SQL statement into a queue consumed by the remote instance of the application, which inserts into the remote DB and then pushes the same statement into another queue read by all the local instances. This seems complicated and would probably slow down the application.
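
Roughly, the publishing side would look something like the sketch below, using the Bunny gem (the broker host, queue name and SQL statement are just placeholders):

    require "bunny"

    # Connect to the central broker (placeholder hostname).
    connection = Bunny.new(host: "broker.example.com")
    connection.start

    channel = connection.create_channel
    queue   = channel.queue("sql_statements", durable: true)

    # Push a captured SQL statement so the remote instance can replay it.
    channel.default_exchange.publish(
      "INSERT INTO orders (shop_id, total) VALUES (3, 49.90)",
      routing_key: queue.name,
      persistent:  true
    )

    connection.close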

Is there a design or recommendation I should apply to solve this kind of problem?

Upvotes: 6

Views: 853

Answers (2)

Gregory Mostizky

Reputation: 7261

You can do that, but essentially you are developing your own replication engine. Those things can be tricky to get right (what happens if m1 and m3 are executed on replica r1, but m2 isn't?). I wouldn't develop something like that unless you are sure you have the resources to make it work.

I would look into an existing off-the-shelf replication solution. If you are already using an SQL database, it probably has built-in support for replication. If you are using MySQL, look at its replication documentation for more details.

Alternatively, if you are willing to explore other backends, I heard that CouchDB has great support for replication. I also heard of people using git libraries to do that sort of thing.

Update: After your comment, I realize you already use MySQL replication and are looking for a solution for re-syncing the databases after being offline.

Even in that case, RabbitMQ doesn't help you at all, since it requires a constant connection to work, so you are back to square one. The easiest solution would be to just write all the changes (SQL commands) into a text file at the remote location, then, when you get the connection back, copy that file (scp, ftp, email or whatever) to the master server, run all the commands there, and then resync all the replicas.
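
A minimal sketch of that journal-and-replay idea in Ruby (the file path, connection settings and table are made up for illustration; adapt them to your setup):

    require "active_record"

    JOURNAL = "/var/lib/myapp/pending.sql"   # written locally while offline

    # While offline: append every write statement to the journal.
    def journal_sql(statement)
      File.open(JOURNAL, "a") { |f| f.puts(statement) }
    end

    # After the journal file has been copied to the master server:
    # replay every statement there, then resync the replicas as usual.
    def replay_journal(path)
      ActiveRecord::Base.establish_connection(
        adapter:  "mysql2",
        host:     "master.example.com",
        database: "shops_production",
        username: "replayer",
        password: "secret"
      )
      connection = ActiveRecord::Base.connection

      File.foreach(path) do |line|
        connection.execute(line) unless line.strip.empty?
      end
    end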

Depending on your specific project, you may also need to make sure there are no conflicts when running commands from different remote locations, but there is no general technical solution to this. Again, depending on the project, you may want to cancel one of the conflicting transactions, notify the users that it happened, and so on.

Upvotes: 2

Pan Thomakos

Reputation: 34350

I would recommend taking a look at CouchDB. It's a non-SQL database that does exactly what you are describing automatically. It's used especially in phone applications that often don't have internet or data connectivity. The idea is that you have a local copy of a CouchDB database and one or more remote CouchDB databases. The CouchDB server then takes care of the replication across the distributed systems, and you always work off your local database. This approach is nice because you don't have to build your own distributed replication engine. For more details, take a look at the 'Distributed Updates and Replication' section of the CouchDB documentation.
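
For a rough idea of what that looks like from Ruby, here is a sketch that asks a local CouchDB server to continuously replicate a database to a central one through the standard _replicate endpoint (hostnames and database names are placeholders):

    require "net/http"
    require "json"
    require "uri"

    # Ask the local CouchDB instance to push its changes to the central server.
    # Continuous replication resumes automatically whenever connectivity returns.
    uri = URI("http://localhost:5984/_replicate")

    request = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
    request.body = {
      source:     "shop_db",
      target:     "http://central.example.com:5984/shop_db",
      continuous: true
    }.to_json

    response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
    puts response.body   # CouchDB replies with a JSON status document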

Upvotes: 1
