johnny

Reputation: 1251

number of rows in big table

SELECT COUNT(*) FROM BigTable_1

Which way should I use to get the number of rows in a table if it has more than 1 billion rows?

UPDATE: For example, if the query above runs into a timeout, is there any way to optimize it? How can I make it quicker?

Upvotes: 2

Views: 2606

Answers (8)

Raj More

Reputation: 48016

If you need an exact count, you have to use COUNT(*).

If you are OK with a rough count, you can use a sum of the rows in the partitions:

SELECT SUM(Rows)
FROM sys.partitions
WHERE index_id IN (0, 1)  -- 0 = heap, 1 = clustered index
  AND OBJECT_ID = OBJECT_ID('Database.schema.Table');

If you want to be funny with your COUNT, you can do the following (the expression inside COUNT is never evaluated row by row, so the division by zero does not raise an error):

select COUNT(1/0) from BigTable_1

Upvotes: 5

datagod

Reputation: 1051

A very fast ESTIMATE:

select count(*) from BigTable_1

But don't execute it! Highlight the code and press Ctrl+L to bring up the estimated query plan, then hover over the leftmost arrow. A yellow box appears with the estimated number of rows.

You can query system tables to get the same data, but that is harder to remember. This way is much more impressive to onlookers.

:)
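
If you'd rather get the same estimate from a script than from the keyboard shortcut, a minimal sketch (assuming SQL Server) is to turn SHOWPLAN on, which compiles the plan without executing the query:

SET SHOWPLAN_ALL ON;
GO
-- Not executed: returns the plan rows, including an EstimateRows column
SELECT COUNT(*) FROM BigTable_1;
GO
SET SHOWPLAN_ALL OFF;
GO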

Upvotes: 4

Andrew Lazarus

Reputation: 19302

Depending on your concurrency, speed, and accuracy requirements, you can get an approximate answer with triggers. Create a table

CREATE TABLE TABLE_COUNTS (TABLE_NAME VARCHAR(128), R_COUNT BIGINT DEFAULT 0);
INSERT INTO TABLE_COUNTS (TABLE_NAME, R_COUNT) VALUES ('BigTable_1', 0);

(I'm going to leave out adding a key, etc., for brevity.)

Now set up triggers.

CREATE TRIGGER bt1count_1 ON BigTable_1 AFTER INSERT AS
BEGIN
    -- SQL Server triggers fire per statement, so add the number of rows just inserted
    UPDATE TABLE_COUNTS
    SET R_COUNT = R_COUNT + (SELECT COUNT(*) FROM inserted)
    WHERE TABLE_NAME = 'BigTable_1';
END;

A corresponding decrement trigger goes on DELETEs (sketched below). Now instead of a COUNT, you query the TABLE_COUNTS table. Your result will be a little off in the case of pending transactions, but you may be able to live with that. And the cost is amortized over all of the INSERT and DELETE operations; getting the row count when you need it is fast.
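
For completeness, a minimal sketch of the decrement trigger and the lookup query, assuming SQL Server and the table names used above:

CREATE TRIGGER bt1count_2 ON BigTable_1 AFTER DELETE AS
BEGIN
    -- Subtract however many rows the statement removed
    UPDATE TABLE_COUNTS
    SET R_COUNT = R_COUNT - (SELECT COUNT(*) FROM deleted)
    WHERE TABLE_NAME = 'BigTable_1';
END;

-- Cheap lookup in place of COUNT(*)
SELECT R_COUNT FROM TABLE_COUNTS WHERE TABLE_NAME = 'BigTable_1';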

Upvotes: 2

Mikael Eriksson

Reputation: 138960

You can use sys.dm_db_partition_stats.

select sum(row_count)
from sys.dm_db_partition_stats
where object_id = object_id('TableName')
  and index_id < 2  -- 0 = heap, 1 = clustered index; excludes nonclustered indexes

Upvotes: 2

DForck42

Reputation: 20327

If you've got a primary key, you should be able to do this:

select count(PrimaryKey) from table_1

Upvotes: -2

gcores

Reputation: 12656

Try this:

select sum(P.rows) from sys.partitions P with (nolock)
join sys.tables T with (nolock) on P.object_id = T.object_id
where T.Name = 'Table_1' and P.index_id = 1  -- 1 = clustered index; use 0 if the table is a heap

It should be a lot faster. Got it from here: SELECT COUNT(*) FOR BIG TABLE

Upvotes: 1

Nicholas Carey

Reputation: 74197

There's only one accurate way to count the rows in a table: COUNT(*). sp_spaceused or looking at the statistics won't necessarily give you the correct answer.
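
For reference, the sp_spaceused route being referred to is a one-liner; its rows figure comes from catalog metadata rather than a table scan, which is why it isn't guaranteed to be exact:

-- Reports space usage plus a metadata-based row count
EXEC sp_spaceused N'BigTable_1';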

Upvotes: 0

contactmatt

Reputation: 18600

Your query will get the number of rows regardless of the quantity. Try using the query you listed in your question.

Upvotes: 0
