Reputation: 843
I'm hoping someone has some insight to offer here. I'm in an environment with a central database server holding a database of around 20 GB and individual database servers at about 200 facilities. The intention is to run a copy of our application at each facility pointing at its local server, but to sync all databases in both directions as often as possible (no more than 10,000 rows affected per day, with individual rows averaging 1.5 KB). Due to varying connectivity, a facility could be offline for a week or two at times and needs to catch up once back online.
Question: Using pull replication with the merge strategy, are there practical limits that would affect our environment? At 50, 100, or 200 facilities, what negative effects, if any, can we expect to see? What kind of bandwidth should we plan for at the central server? (I'm finding very little about this anywhere I look.)
I appreciate any thoughts or guidance you may have.
Upvotes: 0
Views: 84
Reputation: 6734
Based on your description, the math looks like this:
1.5 KB (per row) * 10,000 rows ≈ 15 MB per day (minimum) of change data arriving at every one of your 50 to 200 sites.
15 MB * (50 to 200 sites) ≈ 0.75 to 3 GB per day (minimum) sent from your central server.
So each site moves a modest ~15 MB per day, while your hub handles roughly 0.75 to 3 GB per day in total, plus whatever backlog accumulates while a facility is offline.
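For reference, here is the same arithmetic as a small Python sketch. The 1.5 KB row size, 10,000 rows per day, and one-to-two-week offline window come straight from your description; the figures ignore replication metadata and protocol overhead, which will add something on top.

```python
# Back-of-envelope estimate of daily merge sync traffic.
ROW_SIZE_KB = 1.5        # average row size, from the question
ROWS_PER_DAY = 10_000    # worst-case changed rows per day, from the question
OFFLINE_DAYS = 14        # worst-case offline window (one to two weeks)

daily_mb_per_site = ROW_SIZE_KB * ROWS_PER_DAY / 1024    # ~14.6 MB/day per subscriber
catchup_mb = daily_mb_per_site * OFFLINE_DAYS            # backlog after two weeks offline

for sites in (50, 100, 200):
    hub_gb_per_day = daily_mb_per_site * sites / 1024
    print(f"{sites:>3} sites: ~{daily_mb_per_site:.1f} MB/day per site, "
          f"~{hub_gb_per_day:.2f} GB/day total at the hub, "
          f"~{catchup_mb:.0f} MB catch-up per site after {OFFLINE_DAYS} days offline")
```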
So bandwidth can still be a concern on slow or unreliable links, and you will definitely want to monitor bandwidth and throughput at the hub. The negative side effect to expect is periodic slowness at your hub during each sync window, with larger bursts whenever a facility catches up after being offline for a week or two.
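If you want a crude, platform-agnostic way to watch the hub's network throughput between syncs (alongside whatever replication monitoring your database provides), a minimal sketch like the one below could work. It assumes the third-party psutil package is installed and that "eth0" is the hub's network interface; both are assumptions you would adjust for your environment.

```python
# Crude NIC throughput sampler for the hub.
# Assumes psutil is installed (pip install psutil) and "eth0" is the hub's NIC.
import time
import psutil

INTERFACE = "eth0"   # assumption: replace with the hub's actual interface name
INTERVAL_S = 60      # sample once per minute

prev = psutil.net_io_counters(pernic=True)[INTERFACE]
while True:
    time.sleep(INTERVAL_S)
    cur = psutil.net_io_counters(pernic=True)[INTERFACE]
    sent_mb = (cur.bytes_sent - prev.bytes_sent) / 1024 / 1024
    recv_mb = (cur.bytes_recv - prev.bytes_recv) / 1024 / 1024
    print(f"last {INTERVAL_S}s: {sent_mb:.2f} MB sent, {recv_mb:.2f} MB received")
    prev = cur
```

Logging these samples over a few days would show how pronounced the spikes are when many subscribers sync at once or when an offline facility catches up.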
Upvotes: 1