Reputation: 11553
Which way of counting the number of rows should be faster in MySQL?
This:
SELECT COUNT(*) FROM ... WHERE ...
Or, the alternative:
SELECT 1 FROM ... WHERE ...
// and then count the returned rows on the client side with a built-in function, e.g. mysql_num_rows() in PHP
One would think that the first method should be faster, as this is clearly database territory and the database engine should be faster than anybody else when determining things like this internally.
Upvotes: 151
Views: 260997
Reputation: 17752
COUNT(*) takes column indexes into account, so it will give the best result. MySQL with the MyISAM engine actually stores the row count (based on the primary key's column), so it doesn't count all rows each time you ask for the total.
Using PHP to count rows is not very smart, because you have to send the data from MySQL to PHP. Why do that when you can achieve the same thing on the MySQL side?
If the COUNT(*) is slow, you should run EXPLAIN on the query and check whether indexes are really used, and where they should be added.
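For example, a check on a hypothetical orders table with a status column (placeholder names, not from the question) could look like this:
-- The key column of the EXPLAIN output shows which index, if any, is used;
-- if it is NULL, an index on status is probably worth adding.
EXPLAIN SELECT COUNT(*) FROM orders WHERE status = 'shipped';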
The following is not the fastest way, but there is a case where COUNT(*) doesn't really fit: when you start grouping results, you can run into a problem where COUNT no longer counts all rows, only the rows within each group.
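For illustration, with a hypothetical orders table (not one from the question), a grouped count returns one number per group rather than the overall total:
-- Returns a count per customer, not the total number of matching rows.
SELECT customer_id, COUNT(*) AS orders_per_customer
FROM orders
GROUP BY customer_id;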
The solution is SQL_CALC_FOUND_ROWS. It is usually used when you are selecting rows but still need to know the total row count (for example, for paging).
When you select the data rows, just append the SQL_CALC_FOUND_ROWS keyword after SELECT:
SELECT SQL_CALC_FOUND_ROWS [needed fields or *] FROM table LIMIT 20 OFFSET 0;
After you have selected needed rows, you can get the count with this single query:
SELECT FOUND_ROWS();
FOUND_ROWS() has to be called immediately after the data-selecting query.
In conclusion, everything actually comes down to how many entries you have and what is in the WHERE clause. You should really pay attention to how indexes are being used when there are lots of rows (tens of thousands, millions, and up).
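As a rough sketch of that advice (again using the hypothetical orders table and status column), an index that matches the WHERE condition lets the count be answered from the index instead of a full table scan:
-- Create an index matching the WHERE condition of the count query.
CREATE INDEX idx_orders_status ON orders (status);

-- This count can now be served from the index.
SELECT COUNT(*) FROM orders WHERE status = 'shipped';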
Upvotes: 147
Reputation: 3157
If you don't need a super-exact count, you can set a lower transaction isolation level for the current session. Do it like this:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT count(*) FROM the_table WHERE ...;
COMMIT; /* close the transaction */
It is also good to have an index that matches the WHERE condition.
This really speeds up the counting for big InnoDB tables. I checked it on a table with ~700M rows under heavy load, and it worked: it reduced the query time from ~451 seconds to ~2 seconds.
I took the idea from this answer: https://stackoverflow.com/a/918092/1743367
Upvotes: 2
Reputation: 1971
In my tests, this query returned the fastest results:
SELECT SQL_CALC_FOUND_ROWS 1 FROM `orders`;
SELECT FOUND_ROWS();
In my benchmark test it took 0.448s.
This next query takes 4.835s:
SELECT SQL_CALC_FOUND_ROWS * FROM `orders`;
SELECT FOUND_ROWS();
And COUNT(*) takes 25.675s:
SELECT count(*) FROM `orders`;
Upvotes: 4
Reputation: 25200
This query (which is similar to what bayuah posted) shows a nice summary of the row counts of all tables inside a database (it is a simplified version of the stored procedure by Ivan Cachicatari, which I highly recommend):
SELECT TABLE_NAME AS 'Table Name', TABLE_ROWS AS 'Rows' FROM information_schema.TABLES WHERE TABLES.TABLE_SCHEMA = 'YOURDBNAME' AND TABLES.TABLE_TYPE = 'BASE TABLE';
Example:
+-----------------+---------+
| Table Name      | Rows    |
+-----------------+---------+
| some_table      |   10278 |
| other_table     |     995 |
+-----------------+---------+
Upvotes: 35
Reputation: 59
A COUNT(*) statement with a WHERE condition on the primary key returned the row count much faster for me, avoiding a full table scan.
SELECT COUNT(*) FROM ... WHERE <PRIMARY_KEY> IS NOT NULL;
This was much faster for me than
SELECT COUNT(*) FROM ...
Upvotes: 0
Reputation: 1266
EXPLAIN SELECT id FROM ....
did the trick for me, and I could see the number of rows under the rows column of the result.
Upvotes: 6
Reputation: 111
I handled tables for the German government that sometimes had 60 million records, and we needed to know the total number of rows many times.
So we database programmers decided that in every table, record one is always the record in which the total record count is stored. We updated this number whenever rows were inserted or deleted.
We tried all the other ways. This is by far the fastest way.
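A minimal sketch of the idea, with hypothetical names; here the count is kept in a separate one-row counter table maintained by triggers, which is a variation of storing it in record one of the data table itself:
-- One-row table holding the current total of the hypothetical records table.
CREATE TABLE records_count (total BIGINT NOT NULL);
INSERT INTO records_count (total) SELECT COUNT(*) FROM records;

-- Keep the counter in sync on INSERT and DELETE.
CREATE TRIGGER records_after_insert AFTER INSERT ON records
FOR EACH ROW UPDATE records_count SET total = total + 1;

CREATE TRIGGER records_after_delete AFTER DELETE ON records
FOR EACH ROW UPDATE records_count SET total = total - 1;

-- Reading the total is then a constant-time lookup.
SELECT total FROM records_count;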
Upvotes: 0
Reputation: 111
I did some benchmarks to compare the execution time of COUNT(*) vs. COUNT(id) (id being the primary key of the table, which is indexed).
Number of trials: 10 * 1000 queries
Result: COUNT(*) is 7% faster.
My advice is to use: SELECT COUNT(*) FROM table
Upvotes: 7
Reputation: 261
Try this:
SELECT
    table_rows AS 'Row Count'
FROM
    information_schema.tables
WHERE
    table_name = 'Table_Name'
AND
    table_schema = 'Database_Name';
Upvotes: 9
Reputation: 10015
If you need to get the count of the entire result set, you can take the following approach:
SELECT SQL_CALC_FOUND_ROWS * FROM table_name LIMIT 5;
SELECT FOUND_ROWS();
This isn't normally faster than using COUNT, although one might think the opposite because the calculation is done internally and no data is sent back to the client, so a performance improvement might be expected.
Running these two queries is good for pagination where you need the total, but not particularly for queries with WHERE clauses.
Upvotes: 8
Reputation: 2683
After speaking with my teammates, Ricardo told us that the fastest way is:
show table status like '<TABLE NAME>' \G
But you have to remember that the result may not be exact.
You can use it from the command line too:
$ mysqlshow --status <DATABASE> <TABLE NAME>
More information: http://dev.mysql.com/doc/refman/5.7/en/show-table-status.html
And you can find a complete discussion at mysqlperformanceblog
Upvotes: 97
Reputation: 6258
Perhaps you may want to consider doing a SELECT MAX(id) - MIN(id) + 1. This will only work if your ids are sequential and rows are never deleted. It is, however, very fast.
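For example, on a hypothetical orders table whose id is an AUTO_INCREMENT primary key with no gaps:
-- Only valid when ids are sequential and no rows have ever been deleted.
SELECT MAX(id) - MIN(id) + 1 AS approx_row_count FROM orders;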
Upvotes: 5
Reputation: 62573
I've always understood that the following gives me the fastest response times:
SELECT COUNT(1) FROM ... WHERE ...
Upvotes: 13