Reputation: 268
I've got a table with some columns containing large text data. A query for 10 rows (the table has only 31 records) takes more than 20 seconds. If I remove the large fields, the query executes quickly. A query for a single row (by id) always executes quickly.
How can I make the query for many rows run faster?
The query looks like this:
SELECT DISTINCT (a.id), a.field_1, a.field_2, a.field_3
, a.field_4, a.field_5, a.filed_6, ...
FROM table_a a, table_b b
WHERE a.field_8 = 'o'
ORDER BY a.field_2 DESC
LIMIT 10;
Upvotes: 0
Views: 193
Reputation: 656391
@a_horse already hinted at the likely syntax error. Try:
SELECT DISTINCT ON (a.id) a.id, a.field_1, a.field_2, a.field_3, ...
FROM table_a a
-- JOIN table_b b ON ???
WHERE a.field_8 = 'o'
ORDER BY a.id, a.field_2 DESC
LIMIT 10;
Note the ON keyword (DISTINCT ON (a.id) is not the same as DISTINCT (a.id)) and read up on the DISTINCT clause in the manual.
Also, an index on field_8 might help.
A multicolumn index on (field_8, id, field_2) might help even more, if you can narrow it down to that (and if that is the sort order you want, which I doubt).
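For illustration, a minimal sketch of those two indexes, using the placeholder table and column names from the question (the index names are made up):

-- simple index on the filter column
CREATE INDEX table_a_field_8_idx ON table_a (field_8);
-- multicolumn index covering the filter, DISTINCT ON and sort columns
CREATE INDEX table_a_field_8_id_field_2_idx ON table_a (field_8, id, field_2);

You would normally create only one of the two, depending on which shape of the query you settle on.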
If you want the result sorted by a.field_2 DESC first:
In PostgreSQL 9.1, if id is the primary key:
SELECT a.id, a.field_1, a.field_2, a.field_3, ...
FROM table_a a
-- JOIN table_b b ON ???
WHERE a.field_8 = 'o'
GROUP BY a.id -- primary key takes care of all columns in table a
ORDER BY a.field_2 DESC
LIMIT 10;
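To check whether the planner actually uses such an index, you can look at the execution plan; a minimal sketch for a simplified version of the query (note that the ANALYZE option actually executes the query):

EXPLAIN (ANALYZE, BUFFERS)
SELECT a.id, a.field_1, a.field_2
FROM table_a a
WHERE a.field_8 = 'o'
ORDER BY a.field_2 DESC
LIMIT 10;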
Upvotes: 1
Reputation: 7887
Why are you selecting from table_b? You don't join these tables! Make a real join like this:
SELECT DISTINCT
(a.id), a.field_1, a.field_2, a.field_3, a.field_4, a.field_5, a.filed_6
FROM table_a a
INNER JOIN table_b b
ON b.field_on_table_b = a.field_on_table_a
WHERE a.field_8 = 'o'
ORDER BY a.field_2 DESC LIMIT 10
Then make sure that field_8 (in the WHERE clause) is covered by an index!
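For example, hypothetical index definitions for the filter column and the join column (index names are made up, column names are the placeholders from the query above):

CREATE INDEX table_a_field_8_idx ON table_a (field_8);           -- filter column
CREATE INDEX table_b_join_col_idx ON table_b (field_on_table_b); -- join column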
Upvotes: 0