edgarmtze

Reputation: 25058

How to measure the size of a table in GB in SQL

In a previous question @Morawski said that "a table with 1,000 columns and 44,000 rows is about 330 MB; that's how much a browser uses for just a few open tabs".

How many columns and rows would the table need for its size to exceed 10 GB (supposing the table holds only double values)?

How did @Morawski conclude that 1,000 columns and 44,000 rows is about 330 MB?

Is there a script that can report this in SQL?

Upvotes: 12

Views: 50215

Answers (5)

Regis St-Gelais

Reputation: 3251

To find the size of all tables in the database, you can use the undocumented stored procedure sp_MSforeachtable.

From SQL Shack:

There have always been some undocumented objects in SQL Server that are used internally by Microsoft, but they can be used by anybody who has access to them. One of those objects is a stored procedure called sp_MSforeachtable.

sp_MSforeachtable is a stored procedure that is mostly used to apply a T-SQL command, iteratively, to every table that exists in the current database.

Here's how you would use it:

sp_MSforeachtable 'exec sp_spaceused [?]'
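
To get one sortable overview rather than a separate result set per table, a common pattern (just a sketch; #sizes is an illustrative temp-table name, and sp_MSforeachtable remains undocumented, so its behaviour may change) is to capture the sp_spaceused output for every table:

-- Sketch: collect sp_spaceused output for all tables so it can be sorted by size.
-- The temp table mirrors sp_spaceused's result set ('48 KB'-style strings).
CREATE TABLE #sizes (
    name        sysname,
    [rows]      varchar(20),
    reserved    varchar(20),
    data        varchar(20),
    index_size  varchar(20),
    unused      varchar(20)
);

EXEC sp_MSforeachtable 'INSERT INTO #sizes EXEC sp_spaceused ''?''';

SELECT *
FROM #sizes
ORDER BY CAST(REPLACE(reserved, ' KB', '') AS bigint) DESC;

DROP TABLE #sizes;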

Upvotes: 2

Alexander V. Yudakov

Reputation: 136

-- Measures table sizes (in kilobytes)
-- Tested on MS SQL Server 2008 R2

declare @t table (
    name nvarchar(100), [rows] int, [reserved] nvarchar(100),
    [data] nvarchar(100), [index_size] nvarchar(100), [unused] nvarchar(100)
)
declare @name nvarchar(300)

-- Walk every user table and capture sp_spaceused output into @t.
-- The name is schema-qualified so tables outside the default schema are found too.
declare tt cursor for
    select quotename(schema_name(schema_id)) + '.' + quotename(name) from sys.tables
open tt

fetch next from tt into @name
while @@FETCH_STATUS = 0
begin
    insert into @t
    exec sp_spaceused @name
    fetch next from tt into @name
end

close tt
deallocate tt

-- sp_spaceused returns sizes as strings such as '8 KB';
-- strip the trailing ' KB' to get numeric kilobytes, then sort by total size.
select name as table_name,
       [rows] as rows_count,
       data + [index] as total_size_kb,
       data as data_size_kb,
       [index] as index_size_kb
from (select name,
             [rows],
             cast(left(data, len(data) - 3) as int) as data,
             cast(left(index_size, len(index_size) - 3) as int) as [index]
      from @t) x
order by 3 desc, 1
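
A cursor-free alternative (a sketch, assuming SQL Server 2005 or later and VIEW DATABASE STATE permission) is to read the page counts straight from sys.dm_db_partition_stats; pages are 8 KB, so multiplying the counts by 8 gives kilobytes:

-- Sketch: per-table size from sys.dm_db_partition_stats.
-- Row counts are taken only from the heap/clustered index (index_id 0 or 1)
-- so nonclustered indexes are not double-counted.
select s.name as schema_name,
       t.name as table_name,
       sum(case when ps.index_id in (0, 1) then ps.row_count else 0 end) as rows_count,
       sum(ps.reserved_page_count) * 8 as reserved_kb,
       sum(ps.used_page_count) * 8 as used_kb
from sys.dm_db_partition_stats ps
join sys.tables t on t.object_id = ps.object_id
join sys.schemas s on s.schema_id = t.schema_id
group by s.name, t.name
order by reserved_kb desc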

Upvotes: 12

Remus Rusanu

Reputation: 294387

There are precise formulas for doing capacity planning for SQL Server; Microsoft documents how to estimate the size of a database and of its tables and indexes.

With 1,000 columns of fixed-length doubles (that would be the float(53) SQL type, 8 bytes of storage each) your row approaches the maximum row size limit, but it still fits in a page. 44,000 rows therefore need 44,000 pages (because of the huge row size, only one row fits per page); at 8 KB per page that is 44,000 * 8 KB ≈ 344 MB. If you have a clustered index, the size increases depending on the key size; see the estimation documentation mentioned above.
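
As a quick sanity check of that arithmetic (a minimal sketch; the variable names are purely illustrative):

declare @cols int = 1000, @rows int = 44000;
declare @row_bytes int = @cols * 8;      -- 8,000 bytes of float(53) data per row
declare @rows_per_page int = 1;          -- per the reasoning above, only one such row fits on an 8 KB page
select @row_bytes as bytes_per_row,
       (@rows / @rows_per_page) * 8 / 1024.0 as approx_size_mb;  -- 44,000 pages * 8 KB ~= 344 MB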

But a table design with 1,000 columns is a huge code smell. Your question is very vague about the database part; your previous question never mentions a database and is about in-memory arrays. Taken together, these two questions just don't make much sense.

Perhaps you are interested in reading about Sparse Columns, EAV modeling, or the XML data type.

Upvotes: 3

Chains

Reputation: 13157

Not sure about the T-SQL script (I'm sure it exists), but you can find it through the UI (SSMS) as follows:

1) Right-click the table
2) Select Properties
3) Go to the Storage tab

From there, it will tell you both the "data space" and the "index space" -- so if you want a total footprint, just add those up.

EDIT
Consider also log space if you're looking for a total footprint for the table.

Here is info on the stored procedure mentioned in @Jon's answer; it also references the sys views where you can query the space-usage data directly: http://msdn.microsoft.com/en-us/library/ms188776.aspx

Upvotes: 3

Jon Egerton

Reputation: 41569

There is a sproc called sp_spaceused. I don't know if this is what @Morawski used, but as an example on a dev db I had handy:

exec sp_spaceused 'aspnet_users'

gives

name          rows    reserved     data      index_size     unused
------------- ------- ------------ --------  ------------   ---------- 
aspnet_Users  3       48 KB        8 KB      40 KB          0 KB
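
If the reported numbers look stale, sp_spaceused also takes an @updateusage argument that corrects the page and row counts before reporting, e.g.:

exec sp_spaceused @objname = 'aspnet_users', @updateusage = 'true'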

Upvotes: 17
