Reputation: 18137
How many records are considered normal for a typical SQL Server database table? If some of the tables in a database contain something like three or four million records, should I consider replacing the hardware, partitioning tables, etc.?

I have a query that joins only two tables and has four conditions in its WHERE clause, plus an ORDER BY. It usually takes 3-4 seconds to execute, but once in every 10 or 20 executions it may take much longer (10 or 20 seconds). I don't think this is related to parameter sniffing, because I am recompiling the query every time.

How can I improve this query to execute in less than a second, and how can I tell what that would take? How can I know whether adding RAM, adding a new hard drive, increasing CPU speed, or improving the indexes on the tables would boost performance?

Any advice on this would be appreciated :)
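For reference, the query is shaped roughly like this (the table, column, and parameter names below are invented for illustration; only the structure matches the real query: a two-table join, four WHERE conditions, an ORDER BY, and a recompile on each run):

```sql
SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.Status = @Status
  AND o.Region = @Region
  AND o.OrderDate >= @FromDate
  AND o.OrderDate <  @ToDate
ORDER BY o.OrderDate DESC
OPTION (RECOMPILE); -- recompiled on every execution, so cached-plan reuse is ruled out
```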
Upvotes: 1
Views: 479
Reputation: 625077
4 million records is not a lot. Even Microsoft Access might manage that.
Even 3-4 seconds for a query is a long time. 95% of the time when you come across performance issues like this, it comes down to missing or inappropriate indexes, or to how the query itself is written, none of which has anything to do with hardware.
Unless the records are enormous or the throughput is extremely high then hardware is unlikely to be the cause or solution to your problem.
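As a concrete first step, let SQL Server report where the time actually goes. A minimal sketch, reusing the hypothetical table names from the question (substitute your own query in the middle):

```sql
-- Report per-statement CPU time, elapsed time, and logical reads
-- in the Messages tab
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- The slow query (hypothetical names). In SSMS, also enable
-- "Include Actual Execution Plan" (Ctrl+M) and look for table scans,
-- key lookups, and an expensive Sort operator feeding the ORDER BY.
SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.Status = 'Shipped'
ORDER BY o.OrderDate DESC;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

High logical reads relative to the number of rows returned usually points at a missing index rather than at hardware, and the intermittent 10-20 second runs are often just the same plan waiting on disk I/O when the pages are not already in the buffer cache.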
Upvotes: 1
Reputation: 115751
Unless you're doing some heavy-weight joins, 3-4 million rows do not require any extraordinary hardware. I'd first investigate whether there are appropriate indexes and whether they are being used correctly.
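A sketch of that investigation using SQL Server's missing-index DMVs (available in SQL Server 2005 and later; treat the output as suggestions to evaluate, not commands to follow blindly):

```sql
-- Indexes the optimizer has wished for since the last restart,
-- ranked by its own estimate of how much they would help
SELECT d.statement           AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact     -- estimated % reduction in query cost
FROM sys.dm_db_missing_index_details     AS d
JOIN sys.dm_db_missing_index_groups      AS g
     ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
     ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;
```

If a suggested index covers the four WHERE columns and the ORDER BY column, creating it (equality columns first, then inequality columns, with the remaining output columns in an INCLUDE clause) is often what turns a multi-second scan-and-sort into a sub-second seek.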
Upvotes: 0