Syska

Reputation: 1149

Azure Traffic Manager for Cloud Services - What about storage access?

I have finally got the time to start looking at Azure. It looks good and scales easily.

Azure SQL, Table Storage and Blob Storage should cover most of my needs: fast access to data, automatic replication and failover to another datacenter.

If an app needs fast global access, Traffic Manager is there and can route users by "Failover" or "Performance".

The "Performance" routing is very nice for Cloud Services and Web Roles / Worker Roles ... BUT ... what about access to data in SQL Azure / Table Storage / Blob Storage?

I have tried searching the web for what to do about this, but haven't found anything about Traffic Manager that mentions how to access data in such a scenario.

Have I missed anything?

Do people access the storage in the original datacenter (and, if that fails, use the Geo-Replication feature)? Is that fast enough? Is internal traffic on the MS network free across datacenters?

This seems like such a simple ...

Upvotes: 1

Views: 727

Answers (3)

ramseyjacob

Reputation: 282

In this episode of Channel 9 they state that Traffic Manager only supports Cloud Services as of now (Jan 2014), but support is coming for Azure Web Sites and other services. I agree that you should be able to request a blob via a single global URL and expect the content to be served from the closest datacenter.

Upvotes: 1

Sandrino Di Mattia

Reputation: 24895

Take a look at Microsoft's guidance: Replicating, Distributing, and Synchronizing Data. You could use Service Bus to keep datacenters in sync. This can cover SQL databases, Storage, search indexes like Solr, ElasticSearch, ... The advantage over solutions like SQL Data Sync is that it's technology-independent and can keep virtually all your data in sync:

(diagram from the linked guidance)
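The fan-out idea behind that guidance can be sketched in a few lines. This is a hedged, in-memory stand-in, not the Azure Service Bus SDK: the `Topic` and `DatacenterStore` classes are hypothetical names invented for illustration, and a real system would deliver messages asynchronously with retries.

```python
# Minimal in-memory sketch of the fan-out pattern: every write is
# published once to a topic, and one subscription per datacenter
# applies it to that datacenter's local store. These classes are
# stand-ins, NOT the Azure Service Bus SDK.

class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, message):
        # Service Bus would deliver a copy to each subscription;
        # here we simply invoke every handler synchronously.
        for handler in self.subscribers:
            handler(message)

class DatacenterStore:
    """Local store for one datacenter (could be SQL, Blob, Solr, ...)."""
    def __init__(self, name, topic):
        self.name = name
        self.data = {}
        topic.subscribe(self.apply)

    def apply(self, message):
        key, value = message
        self.data[key] = value  # idempotent upsert

writes = Topic()
west = DatacenterStore("west-europe", writes)
east = DatacenterStore("east-us", writes)

writes.publish(("user:42", {"name": "Syska"}))

# Both datacenters now hold the same record.
print(west.data == east.data)  # True
```

Because the sync layer only sees opaque messages, the same topic can drive a SQL database in one handler and a search index in another, which is the technology independence the answer refers to.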

Upvotes: 1

kwill

Reputation: 11008

There isn't a one-click, easy-to-implement solution for this issue. How you solve it will depend on where the data lives (i.e. SQL Azure, Blob Storage, etc.) and your access patterns.

  • Do you have a small number of data requests that are not on a performance-critical path in your code? Consider just using the main datacenter.
  • Do you have a large number of read-only requests? Consider replicating the data to another datacenter.
  • Do you do a large number of reads and only a few writes? Consider duplicating the data across all datacenters: each write goes to all datacenters at the same time (incurring a perf penalty), and all reads go to the local datacenter (fast reads).
  • Is your data in SQL Azure? Consider using SQL Data Sync to keep multiple datacenters in sync.
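The "write everywhere, read locally" option above can be sketched as follows. This is a minimal illustration with plain dicts standing in for storage accounts; the `GeoClient` name is invented for this example, and a production version would need retries or a queue for datacenters that are temporarily down.

```python
# Sketch of "write to all datacenters, read from the local one":
# a write pays the fan-out cost once, reads stay fast and local.
# Plain dicts stand in for real storage accounts.

class GeoClient:
    def __init__(self, local, replicas):
        self.local = local          # this datacenter's store
        self.replicas = replicas    # all stores, including the local one

    def write(self, key, value):
        # Synchronous write to every datacenter (the perf penalty
        # mentioned above); failed datacenters would need retries.
        for store in self.replicas:
            store[key] = value

    def read(self, key):
        # Fast path: always serve from the local datacenter.
        return self.local[key]

us, eu = {}, {}
client_eu = GeoClient(local=eu, replicas=[us, eu])
client_eu.write("blob/1", b"hello")

print(client_eu.read("blob/1"))   # b'hello', served from the EU store
print(us["blob/1"] == eu["blob/1"])  # True
```

The trade-off is exactly the one kwill describes: write latency grows with the number of datacenters, but every read is a single local round-trip.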

Upvotes: 0
