Reputation: 31
I have a multi-tier web app for managing patients' folders. Each folder contains a number of modules. What I want to do is keep track of the changes to every module, for every patient, for each user.
So I'm about to design a db table for this, and I estimate it will grow to around 15 million rows. The table will consist of a primary key (int) serving as the id; three foreign keys (all int) referencing the modules, patients' folders, and users; and one simple field holding 0/1 values.
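A minimal sketch of such a table, assuming hypothetical table and column names (the referenced `Modules`, `PatientFolders`, and `Users` tables are placeholders for whatever the schema actually uses):

```sql
-- Hypothetical change-tracking table; all names are illustrative only
CREATE TABLE ChangeLog (
    id           INT IDENTITY(1,1) PRIMARY KEY,
    fk_mod_id    INT NOT NULL REFERENCES Modules(id),
    fk_folder_id INT NOT NULL REFERENCES PatientFolders(id),
    fk_user_id   INT NOT NULL REFERENCES Users(id),
    seen         BIT NOT NULL DEFAULT 0  -- the simple 0/1 field
);
```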
Then the logged-in user, while navigating the app, will be frequently informed about any updates via a colour code on the folders and modules. So I will be querying the db frequently (with simple SELECT queries), and I want it to be fast enough to keep the user experience to a standard (response time of ~1 second or lower).
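The kind of lookup described above might look something like this, assuming a hypothetical tracking table called `ChangeLog` with foreign-key columns and a 0/1 `seen` flag (all names are assumptions, not from the actual schema):

```sql
-- Unseen changes for the current user, grouped by folder and module,
-- to drive the colour coding in the UI
SELECT fk_folder_id, fk_mod_id
FROM ChangeLog
WHERE fk_user_id = @CurrentUserId
  AND seen = 0;
```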
Do I need to have any concerns about performance? I'm using MS SQL Server 2008 R2 on a quad-core Xeon with 4 GB RAM.
Thanks in advance:)
Upvotes: 3
Views: 240
Reputation: 8860
A scan of a 15M-row table usually doesn't take more than a second anyway. So if your question is about a single independent request, I don't think you will have problems under those conditions.
But for future needs and better performance, you might consider adding non-clustered indexes on the columns "fk_user_id", "fk_mod_id", and "fk_folder_id". I expect this will significantly improve SELECT performance (at the cost of slower inserts and updates).
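For example, assuming the table is called `ChangeLog` (table and index names here are hypothetical):

```sql
-- Single-column non-clustered indexes on the three foreign keys
CREATE NONCLUSTERED INDEX IX_ChangeLog_User   ON ChangeLog (fk_user_id);
CREATE NONCLUSTERED INDEX IX_ChangeLog_Mod    ON ChangeLog (fk_mod_id);
CREATE NONCLUSTERED INDEX IX_ChangeLog_Folder ON ChangeLog (fk_folder_id);
```

If the SELECTs always filter on the user first, a single composite index leading with `fk_user_id` may serve those queries better than three separate single-column indexes, but that depends on the actual query patterns.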
Upvotes: 1