Reputation: 89
I've read ten or so "tutorials", and they all involve the same thing, i.e.:
SELECT COUNT(*)
FROM table
WHERE something = ?

SELECT *
FROM table
WHERE something = ?
LIMIT ? OFFSET ?
Two very similar queries, no? There has to be a better way to do this. My dataset is 600,000+ rows and already sluggish (results are determined by over 30 WHERE clauses that vary from user to user, but the columns are properly indexed, of course).
Upvotes: 2
Views: 237
Reputation: 536
You could use CREATE TABLE AS and place all the results in a new table, then count and paginate from that table instead of re-running the filters. You do have to manage the created tables yourself, though, if TEMP tables are not an option.
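A minimal sketch of that approach, assuming TEMP tables are an option and using illustrative table and column names:

-- Materialize the filtered result set once per search.
CREATE TEMP TABLE search_results AS
SELECT *
FROM my_table
WHERE something = 'some value';  -- ...plus the other 30 filters

-- Counting the snapshot is cheap, and pages come straight from it.
SELECT COUNT(*) FROM search_results;
SELECT * FROM search_results LIMIT 50 OFFSET 100;

If TEMP tables aren't an option, the same works with a regular table named per session and dropped when the search is abandoned.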
Upvotes: 0
Reputation: 54882
Unfortunately, to get the exact count as of the moment of the query, PostgreSQL has to go through all the rows that match the criteria and check whether each is visible to your transaction. But you probably don't need the exact count, because the results are stale anyway as soon as you send them to the user. So there are things you may try. One is to use the planner's row estimate from EXPLAIN instead of an exact count; keep in mind that the planner assumes the WHERE conditions are independent, so for correlated columns it can be far off. For example, a query with
WHERE sex='male' AND has_breasts=true
will assume that 25% will match, which is probably an order of magnitude off. If you run EXPLAIN ANALYZE on the paged query, you can check how many rows the planner expected to have to go through to get the first page of results and how many it actually had to go through, and scale the estimate accordingly. This is probably similar to what Google uses to estimate how many pages match your query; if I remember correctly, Lucene supports similar estimation.
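A sketch of that check (table and filter names are illustrative, carried over from the question); EXPLAIN ANALYZE prints the planner's estimated row count next to the actual one:

EXPLAIN ANALYZE
SELECT *
FROM my_table
WHERE something = 'some value'
LIMIT 50 OFFSET 0;

-- The output contains a line like:
--   Seq Scan on my_table (cost=0.00..123.45 rows=1500 width=64)
--                        (actual time=0.02..1.23 rows=180 loops=1)
-- rows=1500 is the planner's estimate; the actual rows figure is what this
-- page really had to go through. Scale the total estimate by actual/estimated.
Upvotes: 2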
Reputation: 126991
Use the statistics for a count estimate. That will do for pagination and won't add much overhead.
See http://wiki.postgresql.org/wiki/Count_estimate
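That page's trick is a function that runs EXPLAIN and pulls the planner's row estimate out of the plan text; a sketch adapted from the wiki (treat the details as approximate):

CREATE FUNCTION count_estimate(query text) RETURNS integer AS $$
DECLARE
    rec  record;
    rows integer;
BEGIN
    -- EXPLAIN the query and grab rows=N from the first plan line that has it.
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        rows := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN rows IS NOT NULL;
    END LOOP;
    RETURN rows;
END;
$$ LANGUAGE plpgsql VOLATILE STRICT;

-- Usage:
-- SELECT count_estimate('SELECT * FROM my_table WHERE something = ''x''');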
Upvotes: 1