Reputation: 311
I need to know whether it is standard practice to decompose complex queries into parts and create temporary tables which are dropped at the end. In OLAP applications it shouldn't be much of an issue, but is it avoided in OLTP, where speed matters?
Upvotes: 1
Views: 823
Reputation: 3025
In OLTP systems, if the processing is part of the online system (i.e. not batch), then I can't recall ever using a temporary table. Using some sort of procedural logic is usually the way to go, e.g. PL/SQL in Oracle and so on.
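As a minimal sketch of what that procedural approach can look like, here is an anonymous Oracle PL/SQL block; the orders table, its columns, and the discount rule are invented purely for illustration:

    -- Hypothetical example: apply a discount to one customer's open orders
    -- using a cursor loop instead of staging the rows in a temporary table.
    DECLARE
      v_customer_id orders.customer_id%TYPE := 42;
    BEGIN
      FOR ord IN (SELECT order_id, order_total
                    FROM orders
                   WHERE customer_id = v_customer_id
                     AND status = 'OPEN') LOOP
        UPDATE orders
           SET order_total = ord.order_total * 0.95
         WHERE order_id = ord.order_id;
      END LOOP;
    END;

The row-by-row logic stays inside one block and one transaction, so nothing needs to be parked in an intermediate table.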
In OLAP, temporary tables are very common: you typically load the data into a table, transform it, and save the result in another table, with a number of transform steps depending on the processing.
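A rough sketch of that staged pattern follows; the table and column names are made up, and the CREATE TABLE AS syntax assumes a dialect such as Oracle or PostgreSQL:

    -- Stage 1: load raw data into a work table
    CREATE TABLE stg_sales_raw AS
    SELECT * FROM external_sales_feed;

    -- Stage 2: transform (cleanse, aggregate) into an intermediate table
    CREATE TABLE stg_sales_by_day AS
    SELECT sale_date, product_id, SUM(amount) AS total_amount
      FROM stg_sales_raw
     GROUP BY sale_date, product_id;

    -- Stage 3: publish the result to the reporting table
    INSERT INTO fact_daily_sales (sale_date, product_id, total_amount)
    SELECT sale_date, product_id, total_amount
      FROM stg_sales_by_day;

    -- Clean up the intermediate tables once the load finishes
    DROP TABLE stg_sales_raw;
    DROP TABLE stg_sales_by_day;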
I'd go so far as to say that if you have an OLTP system and you need to use a temporary table, then something is wrong: modify your design, or use procedural logic. In OLAP, temporary tables are very common.
hth
Upvotes: 0
Reputation: 46663
For simple queries which are well-optimized by your DBMS, temporary tables are usually a bad idea because they introduce overhead.
But sometimes your DBMS will have a really hard time optimizing complex queries. At that point you have at least 5 options:

1. Spend time tuning the query itself.
2. Give the optimizer hints.
3. Use saved (forced) query plans.
4. Change the underlying schema or indexes.
5. Decompose the query into steps using temp tables.
There's no hard-and-fast rule about which option is best for any particular query. I've used all of the above strategies. I tend to choose the temp table approach when I don't own the schema, so I can't change it, and when I don't want to depend on hints or query tuning or saved plans (often because I don't want to expose myself to changes in the underlying schema made later).
Keep in mind that using temp tables to decompose queries will give you sub-optimal performance every time. But it's usually predictably sub-optimal. The worst case using temp tables isn't nearly as bad as when your DBMS chooses a bad plan for a single large query. This happens surprisingly often, especially in the face of changes in underlying schema, DBMS version changes, dev vs. production differences, etc.
Personally, I find that if a query gets to a level of complexity where I have to bend over backwards to get the DBMS to do what I want, and if I feel that maintainability of the application is at risk, then I'll often go with decomposition and temp tables if I can't change the schema or indexes.
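For concreteness, here is a sketch of what that decomposition can look like, using SQL Server style T-SQL (the # prefix makes the tables session-scoped temporaries); the customers/orders tables and the "active in the last six months" rule are assumptions for the example:

    -- Step 1: materialize the small, selective part of the query first
    SELECT customer_id, region
    INTO #active_customers
    FROM customers
    WHERE last_order_date >= DATEADD(month, -6, GETDATE());

    -- Step 2: join the intermediate result against the large table
    SELECT ac.region, SUM(o.total) AS region_total
    INTO #region_totals
    FROM #active_customers ac
    JOIN orders o ON o.customer_id = ac.customer_id
    GROUP BY ac.region;

    -- Step 3: final shaping over the small intermediate result
    SELECT region, region_total
    FROM #region_totals
    ORDER BY region_total DESC;

    -- Temp tables go away when the session ends, but dropping them
    -- explicitly keeps things tidy
    DROP TABLE #active_customers;
    DROP TABLE #region_totals;

Each step is simple enough that the optimizer rarely picks a disastrous plan, which is exactly the "predictably sub-optimal" trade-off described above.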
Of course, in theory you shouldn't be running expensive, complex queries on your OLTP database, but in practice most applications are never "pure" OLTP; there are always a few complicated, hard-to-optimize queries in any OLTP project.
Upvotes: 1
Reputation: 37645
The critical word in your question is "decompose". Temp tables and other strategies are generally discouraged and found to lead to lower overall performance. The optimizer is perfectly capable of using intermediate tables if they are useful for getting to the answer most quickly. Very rarely can you help the optimizer by coercing it with your own strategy.
The same thing goes for suggesting which indexes to use.
When you see this going on, it almost always means someone has more work to do refining their query statements.
Upvotes: 1
Reputation: 2772
The only time I've used temp tables during OLTP processing is when I am dealing with a batch of data that I need to analyze/join and eventually run a data-change operation on (insert/update/delete). I'll use temp tables for a) speed, but more importantly b) because the normal select/update or select/delete logic is either too complex or can't be done in one transactional statement.
For example, find 100k users who meet some condition, and insert them into an archive table and then delete them.
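A hedged sketch of that archive-then-delete pattern, again in SQL Server style T-SQL with invented table and column names; capturing the keys once guarantees the insert and the delete act on exactly the same rows:

    -- Capture the candidate keys up front
    SELECT user_id
    INTO #users_to_archive
    FROM users
    WHERE last_login < DATEADD(year, -2, GETDATE());

    BEGIN TRANSACTION;

    -- Copy the rows to the archive table
    INSERT INTO users_archive (user_id, name, email, last_login)
    SELECT u.user_id, u.name, u.email, u.last_login
    FROM users u
    JOIN #users_to_archive a ON a.user_id = u.user_id;

    -- Remove the same rows from the live table
    DELETE u
    FROM users u
    JOIN #users_to_archive a ON a.user_id = u.user_id;

    COMMIT TRANSACTION;

    DROP TABLE #users_to_archive;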
I don't recommend using temp tables in most cases for normal select statements. You can almost always get better performance with proper indexing, better SQL joins/hints, and/or changing the data structure to match the data access paths.
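For instance, a covering index that matches the access path often removes the need for an intermediate table in the first place; this example uses SQL Server's INCLUDE syntax and an assumed orders table:

    -- Hypothetical covering index for queries that filter on
    -- customer and status and only read order_total
    CREATE INDEX ix_orders_customer_status
        ON orders (customer_id, status)
        INCLUDE (order_total);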
Upvotes: 0