kev

Reputation: 145

Is there a downside to creating and dropping too many tables on SQL Server?

I would like to know if there is an inherent flaw in the following way of using a database. I want to build a reporting system with a web front end. I would query the database for the relevant data and send the results of the query into a new table using "SELECT INTO". The program would then query that table to show one "page" of the report at a time, which has the advantage that a large result set can be presented to the user a little at a time. The same table can be read over and over as the user requests different pages of the report, and when the web session ends, the table can be dropped.
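Roughly, the pattern I have in mind would look like this (all table, column and session names are just placeholders):

-- run the expensive query once per session and materialize the result
SELECT ROW_NUMBER() OVER (ORDER BY o.OrderDate) AS RowNum, o.*
INTO   Report_Session_ABC123
FROM   dbo.Orders AS o
WHERE  o.OrderDate >= '20120101';

-- each page request then reads one slice of the materialized table
SELECT *
FROM   Report_Session_ABC123
WHERE  RowNum BETWEEN 51 AND 100;  -- page 2 at 50 rows per page

-- when the web session ends
DROP TABLE Report_Session_ABC123;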

I am prepared to program around issues such as tracking the tables and ensuring they are dropped when needed.

I have a vague concern that over a long period of time the database itself might develop some form of maintenance problem from having so many tables created and dropped. Day by day, let's say perhaps 1000 such tables are created and dropped.

Does anyone see any cause for concern?

Thanks for any suggestions/concerns.

Upvotes: 5

Views: 1372

Answers (3)

Paul Sasik

Reputation: 81429

Before you start implementing your solution, consider using SSAS, or simply SQL Server with a good model and properly indexed tables. SQL Server, IIS and the OS all perform caching operations that will be hard to beat.

The cause for concern is that you're trying to write code that will outperform SQL Server and IIS. This is a classic example of premature optimization: thousands and thousands of programmer hours have been spent making SQL Server and IIS as fast and efficient as possible, and it's not likely that your strategy will get better performance.
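For example, on SQL Server 2012 and later you can page an indexed result set directly with OFFSET/FETCH and skip the intermediate table entirely (a sketch; the table and column names are assumed):

-- page 2 at 50 rows per page, read straight from the source table;
-- an index on OrderDate keeps this cheap
SELECT o.*
FROM   dbo.Orders AS o
WHERE  o.OrderDate >= '20120101'
ORDER  BY o.OrderDate
OFFSET 50 ROWS FETCH NEXT 50 ROWS ONLY;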

Upvotes: 3

Paolo Falabella

Reputation: 25844

First of all: +1 to @Paul Sasik's answer.

Now, to answer your question (if you still want to go with your approach): a possible cause for concern, if you use VARBINARY(MAX) columns, comes from the MSDN documentation:

If you drop a table that contains a VARBINARY(MAX) column with the FILESTREAM attribute, any data stored in the file system will not be removed.

If you do decide to go with your approach, I would use global temporary tables. They are dropped automatically once no more connections are using them, but you can still DROP them explicitly. In your query you can check whether the table exists and create it if it doesn't (any longer):

IF OBJECT_ID('tempdb..##temp') IS NULL  -- temp tables live in tempdb, not in your database
BEGIN
    -- create the global temp table and perform your query
END

This way, the logic that performs your queries and the logic that manages the temporary tables sit together, which should make the code more maintainable. Plus, temporary tables are built to be created and dropped constantly, so it's quite safe to assume SQL Server would not be impacted in any way by creating and dropping a lot of them.
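Putting it together, the per-session lifecycle might look roughly like this (the table name and the report query are placeholders):

-- materialize the report once into a global temp table (it lives in tempdb)
SELECT o.* INTO ##report_abc123 FROM dbo.Orders AS o;

-- ...serve report pages from ##report_abc123 while the session is alive...

-- explicit cleanup; otherwise SQL Server drops the table on its own
-- once the last connection referencing it closes
IF OBJECT_ID('tempdb..##report_abc123') IS NOT NULL
    DROP TABLE ##report_abc123;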

Upvotes: 1

Florin Ghita

Reputation: 17643

1000 per day should not be a concern if you are talking about small tables.

I don't know SQL Server, but in Oracle you have the concept of a temporary table (small article and another). Data inserted into this type of table is visible only in the current session; when the session ends, the data "disappears", so you don't need to drop anything. Every user inserts into the same table, and each user's data is not visible to the others. Advantage: less maintenance. You may want to check whether SQL Server has something similar.
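For reference, the Oracle feature looks roughly like this (a sketch; the table and column names are made up):

-- Oracle: the table is created once; afterwards every session inserts into it,
-- but each session sees only its own rows, which vanish when the session ends
CREATE GLOBAL TEMPORARY TABLE report_scratch (
    row_num  NUMBER,
    payload  VARCHAR2(4000)
) ON COMMIT PRESERVE ROWS;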

Upvotes: 1
