Reputation: 347
I have this use case:
Three tables: Customers, Users, Contacts.
Each customer might have around 10,000 users.
Each user will have around 15,000 contacts. I want to store, for each user, their 15,000 contacts in the Contacts table. However, with only one customer, that already puts me at 150 million rows. What if I have 100 customers?
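Roughly, this is the layout I have in mind (a sketch only; the column names are placeholders):

```sql
-- Sketch of the three tables described above; names are placeholders.
CREATE TABLE Customers (
    CustomerId INT IDENTITY PRIMARY KEY,
    Name       NVARCHAR(255) NOT NULL
);

CREATE TABLE Users (
    UserId     BIGINT IDENTITY PRIMARY KEY,
    CustomerId INT NOT NULL REFERENCES Customers (CustomerId),
    Name       NVARCHAR(255) NOT NULL
);

-- At ~10,000 users per customer and ~15,000 contacts per user,
-- this table reaches ~150 million rows per customer.
CREATE TABLE Contacts (
    ContactId  BIGINT IDENTITY PRIMARY KEY,
    UserId     BIGINT NOT NULL REFERENCES Users (UserId),
    Name       NVARCHAR(255) NOT NULL,
    Email      NVARCHAR(255) NULL
);
```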
I know table partitioning might help, but is there a better option? What would be the best way to store this amount of data so that it stays usable afterwards (with optimized SELECT and INSERT queries)?
Thank you!
Upvotes: 0
Views: 542
Reputation: 108641
The numbers of rows you mention should not require any extraordinary table design.
To keep performance good, avoid large-object column data types like TEXT, LONGTEXT unless you absolutely need them. Instead use NVARCHAR(255) or whatever makes most sense in your application.
Your tables will need the correct indexes to get best performance. To find those indexes, once you have populated your tables with at least some data, you can use SSMS: Before you run the query in SSMS, right-click in the query panel and choose Show Actual Execution Plan. Then run the query. SSMS will sometimes suggest an index for the query at the top of the execution plan.
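Besides the execution-plan hints in SSMS, you can also ask SQL Server directly which indexes it thinks you are missing. This is a sketch using the standard missing-index DMVs; treat the output as hints to evaluate, not as indexes to create blindly:

```sql
-- Missing-index suggestions SQL Server has recorded since its last restart.
SELECT d.statement          AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,          -- how often this index would have helped
       s.avg_user_impact      -- estimated % cost reduction
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s
  ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;
```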
It's impossible to guess correct indexes without knowing what your most frequently used queries are. One thing is sure: adding single-column indexes on lots of columns is probably not helpful.
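For example, suppose (hypothetically) your hottest query fetches one user's contacts ordered by name. One composite index serves both the WHERE and the ORDER BY; separate single-column indexes on UserId and Name would not:

```sql
-- Hypothetical hot query: one user's contacts, sorted by name.
SELECT ContactId, Name, Email
FROM Contacts
WHERE UserId = @UserId
ORDER BY Name;

-- Composite index: seeks on UserId, rows already ordered by Name.
-- INCLUDE (Email) makes the index covering, so the query never
-- has to touch the base table.
CREATE INDEX IX_Contacts_UserId_Name
    ON Contacts (UserId, Name)
    INCLUDE (Email);
```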
As SQL applications -- like yours -- grow and add users, the tables grow too. It is very common for tables to need new indexes as they grow, and it's completely normal to add or change indexes on applications that are already in production. Revisit your indexing at 100K rows, then again at a million, ten million, and so on.
Upvotes: 1