Reputation: 5288
I have a database that logs modification records into a table. This modification table contains foreign keys to other tables (it stores only references to the objects that were modified). Objects in this modification table can be grouped into different populations, and when a user accesses the service, he only queries the database for objects in his population. I expect about 2 to 10 new populations each week.
If I split this single table into many tables, I guess it will speed up the response time.
But on the other hand, each insert into the modification table would first require the name of the target table (which implies another request). To handle this, I plan to add a column to the "population" table with a varchar holding the name of that population's modification table.
My question is a design-pattern / architecture one: should I go for a single very large table with three WHERE clauses on each request, or should I try many light tables that need no WHERE clause at all?
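For comparison, here is a minimal sketch of the single-table design, using SQLite and hypothetical table/column names (`population`, `modification`, `population_id`): one composite index turns the per-population WHERE into an index seek, so the cost of a request depends on the size of the user's population rather than on the size of the whole table.

```python
import sqlite3

# Hypothetical schema: a single modification log, filtered per request
# by an indexed population_id column instead of one table per population.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE population (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE modification (
        id            INTEGER PRIMARY KEY,
        population_id INTEGER NOT NULL REFERENCES population(id),
        object_id     INTEGER NOT NULL,
        modified_at   TEXT    NOT NULL
    );
    -- The composite index makes the per-population WHERE an index seek.
    CREATE INDEX idx_mod_population
        ON modification (population_id, modified_at);
""")

conn.executemany("INSERT INTO population (id, name) VALUES (?, ?)",
                 [(1, "pop_a"), (2, "pop_b")])
rows = [(pop, obj, "2023-01-01") for pop in (1, 2) for obj in range(1000)]
conn.executemany(
    "INSERT INTO modification (population_id, object_id, modified_at) "
    "VALUES (?, ?, ?)", rows)

# A user request only ever touches its own population.
count = conn.execute(
    "SELECT COUNT(*) FROM modification WHERE population_id = ?", (1,)
).fetchone()[0]
print(count)  # 1000
```

With this layout, adding 2 to 10 populations a week is just new rows in `population`, with no schema change and no lookup of a target table name before each insert.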
Upvotes: 3
Views: 164
Reputation: 10215
500K - 1M records isn't trivial, but it certainly isn't huge either. What's your database platform? Most mainstream professional platforms (SQL Server, Oracle, MySQL, etc.) are more than capable of handling this.
If the table in question is narrow (has few columns), it's less likely to be an issue than a "wide" table with lots of columns.
Having lots of joins could be an issue (though I can't speak from experience there). Depending on how you manage things (and how good your application code is), do you really need the foreign-key constraints?
Upvotes: 0