Reputation: 6940
I update my big table incrementally. The big table is the result of joining many other tables. The query has so many joins that it has become maze-like, and I ran into a problem while adding yet another join to pull one more column into the table.
Would you recommend making the query even more complex by adding another join, all for the sake of doing a single insert into the table? Or would you recommend splitting it into two queries: one insert, followed by an update that fills in the new column based on a separate query?
I can imagine that one insert of all columns would be faster than a small insert of a few columns followed by an update of the rest. But the number of tables in a join has its limits, at least in terms of clarity. What are the best practices here?
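For illustration, here is a minimal sketch of the two approaches I am weighing (table and column names are made up):

    -- Option 1: one INSERT with every join, including the new one
    INSERT INTO dbo.BigTable (KeyCol, ColA, ColB, NewCol)
    SELECT a.KeyCol, a.ColA, b.ColB, c.NewCol
    FROM dbo.SourceA AS a
    JOIN dbo.SourceB AS b ON b.KeyCol = a.KeyCol
    JOIN dbo.SourceC AS c ON c.KeyCol = a.KeyCol;  -- the extra join

    -- Option 2: INSERT the core columns, then UPDATE the new one separately
    INSERT INTO dbo.BigTable (KeyCol, ColA, ColB)
    SELECT a.KeyCol, a.ColA, b.ColB
    FROM dbo.SourceA AS a
    JOIN dbo.SourceB AS b ON b.KeyCol = a.KeyCol;

    UPDATE t
    SET t.NewCol = c.NewCol
    FROM dbo.BigTable AS t
    JOIN dbo.SourceC AS c ON c.KeyCol = t.KeyCol;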
Upvotes: 1
Views: 300
Reputation: 2989
Have you heard of the MERGE statement? It handles inserts, updates, and deletes in a single statement. Aaron Bertrand has an article on its pitfalls, but overall it helps simplify things when you have to do any of those operations: http://www.mssqltips.com/sqlservertip/3074/use-caution-with-sql-servers-merge-statement/
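To give a rough idea, a MERGE for a case like yours might look something like this (table and column names are placeholders, not your actual schema):

    -- Upsert sketch: update matching rows, insert new ones
    MERGE dbo.BigTable AS tgt
    USING (
        SELECT a.KeyCol, a.ColA, b.ColB
        FROM dbo.SourceA AS a
        JOIN dbo.SourceB AS b ON b.KeyCol = a.KeyCol
    ) AS src
    ON tgt.KeyCol = src.KeyCol
    WHEN MATCHED THEN
        UPDATE SET tgt.ColA = src.ColA,
                   tgt.ColB = src.ColB
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (KeyCol, ColA, ColB)
        VALUES (src.KeyCol, src.ColA, src.ColB);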
As for your question: maintenance-wise and from an index-tuning perspective, it would be much more advantageous to break those two operations out. Adding another join to a query does not make it complex. That is how an RDBMS is supposed to function. Well-commented code and concise naming standards should make any join clear to someone reading the query.
If adding another join complicates the update/insert process, then yes, break them out. If it is just a matter of deciphering what the query is doing, but the operation itself is fine, give it some context with comments.
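By "context with comments" I mean something as simple as this (again, the names here are invented for the example):

    SELECT o.OrderID,
           c.CustomerName,
           r.RegionName
    FROM dbo.Orders AS o
    -- every order must have a customer, so an inner join is safe
    JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
    -- region is optional; LEFT JOIN keeps orders that have none
    LEFT JOIN dbo.Regions AS r ON r.RegionID = c.RegionID;

A one-line comment per join stating why it is there, and what kind of join it has to be, goes a long way toward keeping a many-join query readable.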
Asking for best practices on something like this tends to get the answer "it depends". It is a broad subject and can be handled multiple ways depending on your situation. Someone with a small row count that isn't expected to grow can get away with a somewhat expensive operation. Someone with a large row count (read: millions to billions of rows) would most likely be looking for a new job if they brought the server to its knees all the time.
Upvotes: 1