Nate

Reputation: 2326

Database taking up way more space than tables combined

I have a database that's taking up nearly 7 gigs. If I look at the table usage, it should be much less than that, like 40 megs. There was a large log table that I deleted yesterday, but my database still says it's very large.

Here are the stats:

database_name         database_size   unallocated space
Umbraco_Indoorpower   6911.56 MB      859.59 MB

reserved    data        index_size   unused
31144 KB    26272 KB    3240 KB      1632 KB

I ran this:

DBCC SHRINKDATABASE (umbraco_indoorpower, 99);

And that got my database down to 2.3 gigs. Still though, way too large.

database_name         database_size   unallocated space
Umbraco_Indoorpower   2302.44 MB      1.63 MB

reserved    data        index_size   unused
30016 KB    26200 KB    3240 KB      576 KB

I'm guessing I'm not freeing up all the space from the log table I deleted yesterday. I actually ran DELETE FROM tblLog. Maybe that was the wrong way to go about it.

Does anyone know how I can free up some more space?
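(For reference, the DELETE I ran was an ordinary row-by-row delete, which is fully logged. If the table can simply be emptied wholesale, a minimally logged alternative — assuming nothing references tblLog via foreign keys — would be:

TRUNCATE TABLE tblLog;

though that still wouldn't shrink the files themselves.)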

Upvotes: 3

Views: 3576

Answers (3)

Aaron Bertrand

Reputation: 280252

How big is the log file? What is your recovery model? It's quite possible that the database_size number above is nearly 7 GB of log and very little data. Find the files on your hard drive - you can locate the paths using:

EXEC umbraco_indoorpower..sp_helpfile;

I am going to bet that the LDF is huge and the MDF is actually small, in which case you are probably in the FULL recovery model and have never taken a log backup. If that's true, you can do this:
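You can confirm both guesses from the catalog views — a quick sketch (recovery_model_desc comes from sys.databases, and size in sys.database_files is in 8 KB pages):

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'umbraco_indoorpower';

SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM umbraco_indoorpower.sys.database_files;

If size_mb for the LOG row dwarfs the ROWS row, the log is the culprit.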

USE umbraco_indoorpower;
GO
BACKUP LOG umbraco_indoorpower TO DISK = 'C:\some_path\umbraco.trn';
GO
DBCC SHRINKFILE(umbraco_indoorpower_log, 20); -- guessing on target MB size here

(If you are in simple recovery model, the above will fail, but there will be some other explanation why the log file is large - e.g. a long-running or uncommitted transaction, did your delete commit?)
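To see exactly what is preventing log reuse, check log_reuse_wait_desc — for example:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'umbraco_indoorpower';

LOG_BACKUP means you need a log backup; ACTIVE_TRANSACTION means an open (possibly uncommitted) transaction is pinning the log.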

Then you will want to either (a) set up proper maintenance, including full/diff/log backups, which will help make optimal reuse of the log file, or (b) switch to simple recovery, in which case the log will manage itself.

In most cases simple recovery does not provide enough protection in the event of a disaster, but that is for you to decide.
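If you do decide on option (b), the switch is a one-liner, after which a one-time shrink reclaims the space (a sketch — the target size in MB and the logical log file name are guesses, as above):

ALTER DATABASE umbraco_indoorpower SET RECOVERY SIMPLE;
GO
DBCC SHRINKFILE(umbraco_indoorpower_log, 20);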

In the meantime, you can shrink the file all you want, but if you keep your recovery model and transaction handling the way they are, the file will just grow again and you'll be back tomorrow running the shrink command. That is absolutely horrible for your files, and it's why I object to answers that just say "run a shrink operation." I talk about why here:

Upvotes: 7

Frank Socha

Reputation: 157

Another thing that can take up a surprising amount of space in SQL Server is Service Broker queues. In my case I had 6 million rows sitting in queues, taking up 17 GB...

Upvotes: 0

usr

Reputation: 171178

The existing answers are already pretty good. I have one additional solution: Script the database including data (the SSMS UI allows you to do this easily) and execute the script in a fresh database.

You may also want to switch to the simple recovery model (if you don't have a special need for full logging). One thing is for sure: you can't run in full mode and not have proper transaction log management.

Upvotes: 1
