volume one

Reputation: 7581

What is the best way to share database data between multiple sites?

I am running multiple websites and have noticed that many of the tables they rely on are standard stuff that ideally should be centralised:

When one of these tables changes, I have to update it in each site's database, which is getting tedious and causing oversights. I originally designed it this way because I thought a shared database would be a single point of failure: if it ever went down, all my sites would suffer.

Is there a better way of doing this, or do I just have to accept the risk of a single point of failure and put robust disaster recovery procedures in place to mitigate it?

I am using SQL Server 2014 Enterprise.

Upvotes: 2

Views: 2600

Answers (3)

shankarsh15

Reputation: 1967

Why not create the database as a service, meaning you don't expose the database directly to the client sites but expose it via a REST interface?

This way you can handle the application's fault tolerance requirements in a better way, and also build better recovery mechanisms on top of it.
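A minimal sketch of what that service layer could look like. The class name, the in-memory "database", and the caching fallback are all hypothetical placeholders, not an actual implementation; the point is that because the sites call the service rather than the database, the service can absorb a database outage (here by serving the last known good value) instead of every site failing:

```python
# Hypothetical sketch: sites call this service instead of the shared database.
# The service keeps a last-known-good cache so reads can survive a db outage.

class RefDataService:
    def __init__(self, db):
        self.db = db          # the one shared reference database (any mapping)
        self.cache = {}       # last known good values, served during outages

    def get(self, table, key):
        try:
            value = self.db[table][key]        # normal path: read the shared db
            self.cache[(table, key)] = value   # remember it in case the db dies
            return value
        except KeyError:
            raise                              # genuinely missing data: fail loudly
        except Exception:
            # database unreachable: serve the stale cached copy instead of failing
            return self.cache.get((table, key))
```

In a real deployment the `get` method would sit behind a REST endpoint and the "database" would be your SQL Server instance, but the fault-tolerance shape is the same.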

Hope this helps.

Upvotes: 1

Bob Salmon

Reputation: 431

It comes down to balancing risks, costs and benefits - which of your problems is more likely to happen, and how bad would things be if it did?

How often does an update to the reference data not get applied everywhere? How bad is it when that happens? Is it as bad if an update does get applied everywhere, but only eventually rather than immediately? And so on.

Compare that to: How often do databases crash? How bad is it if that happens? And so on.

How important is it that the system is simple? Is it already hard to manage? If so, is it the code or the database that's the harder bit?

There are several options I can think of; which is best will depend on your circumstances, including what technology you already have skills in, what your technology strategy is, etc.

  • You could keep things as they are.
  • You could change to a single large shared db, with appropriate availability safeguards. This means the sites will always be completely in sync, but back-ups and upgrades will be bigger and harder, and the sites are tied together (which might not fit so well with your organisation structure).
  • If you have a single system of record for the reference data, whatever applies updates to it could be changed to also apply those updates to the other databases. This doesn't scale so well and isn't transactionally safe (a given site might miss an update if the update code crashes), but it keeps the database side of things unchanged and might be good enough.
  • You could split each database into two parts: one for the reference data (the same for each site) and one for everything else (different for each site). You can set up replication from the system of record for the reference data out to the other instances, and then change the client code to read both parts of the database.
  • If you already have some kind of shared infrastructure between the different sites e.g. a shared message queue as in micro-services, the thing that updates the system of record could also put a message onto the queue, which gets read by each website and leads to it updating its own database. (This would be eventual consistency rather than immediate, but that might be good enough.)
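The last option above can be sketched in a few lines. Everything here (the `Site` class, the per-site queues, the tuple-shaped updates) is a hypothetical stand-in for your real message queue and databases; it just shows the fan-out shape and why the result is eventual rather than immediate consistency, since each site only sees the update once it drains its queue:

```python
# Hypothetical sketch of queue-based reference-data propagation:
# the system of record publishes each update, every site applies it locally.
from queue import Queue

class Site:
    def __init__(self, name):
        self.name = name
        self.ref_data = {}    # this site's own copy of the reference tables

    def apply(self, update):
        table, key, value = update
        self.ref_data.setdefault(table, {})[key] = value

def publish_update(queues, update):
    # the system of record fans the update out to each site's queue
    for q in queues:
        q.put(update)

def drain(site, q):
    # each site reads its queue and updates its own database
    while not q.empty():
        site.apply(q.get())
```

Between `publish_update` and `drain`, a site's local copy is stale; that window is the "eventual" in eventual consistency, and whether it's acceptable depends on the answer to the risk questions above.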

Upvotes: 1

Bavishi PN

Reputation: 389

1] If all four of your websites mostly use static data, then you should go for a single database. Here DML (Data Manipulation Language) operations should be minimal and infrequent. Website performance will be slightly lower, but that is the trade-off between performance and database maintenance.

2] If all four of your websites are OLTP (online transaction processing) applications, like e-commerce, then you should keep all four databases separate.

Keep disaster recovery in place for whichever of the above options you choose.

Upvotes: 1
