Reputation: 619
I have an application which will be deployed on Azure.
If we don't care about the cost of hosting (which will depend more on CPU and IO than on response time), what would be the best strategy to optimize queries for high-frequency utilization (queries that will be executed many times per second)?
Of course, all these optimization scenarios have one goal: execute the maximum number of queries per second.
What would be the most important strategy, and why?
Upvotes: 0
Views: 510
Reputation: 3586
Reduce IO. Azure has shockingly low IOPS, even with Premium Storage; I guess that's the price of geo-mirroring. Keep all the data for high-frequency queries in tempdb on the SSD temporary storage of a SQL Server VM. Alternatively, try Memory-Optimized Tables.
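A rough sketch of the Memory-Optimized Tables route (the database, filegroup, and table names here are made up for illustration; the database must first get a MEMORY_OPTIMIZED_DATA filegroup):

```sql
-- One-time setup: give the database a memory-optimized filegroup.
-- ALTER DATABASE MyDb ADD FILEGROUP MemFg CONTAINS MEMORY_OPTIMIZED_DATA;
-- ALTER DATABASE MyDb ADD FILE (NAME = 'MemFgFile', FILENAME = '...')
--     TO FILEGROUP MemFg;

-- Hypothetical hot table for the high-frequency queries.
CREATE TABLE dbo.HotLookup
(
    Id    INT           NOT NULL
          PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Value NVARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

If the data can be rebuilt on restart, DURABILITY = SCHEMA_ONLY avoids transaction logging entirely, which cuts IO further.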
Extra CPU can be expensive, but it is available.
Response time between VMs can be reduced if they are in the same Affinity Group. For client response time, pick the nearest Region as the ping flies.
Upvotes: 1
Reputation: 9473
That's very hard to answer, as there are a lot of dependencies:
- data structure (tables, indexes, variable types)
- amount of data sent down to the client (1 row with 1 kB of data, or just ONE row with 500 kB of data)
- are triggers used
- writes vs. reads
- file groups for data
So at this stage we can only imagine how to deal with that...
Upvotes: 0