Reputation: 1326
My table
Field    Type           Null  Key  Default  Extra
id       int(11)        NO    PRI  NULL     auto_increment
userid   int(11)        NO    MUL  NULL
title    varchar(50)    YES        NULL
hosting  varchar(10)    YES        NULL
zipcode  varchar(5)     YES        NULL
lat      varchar(20)    YES        NULL
long     varchar(20)    YES        NULL
msg      varchar(1000)  YES   MUL  NULL
time     datetime       NO         NULL
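(For reference, a minimal CREATE TABLE sketch reconstructed from the output above; the storage engine, character set, and exact DDL are assumptions, and the secondary indexes are listed separately below.)

CREATE TABLE `posts` (
  `id`      INT(11)       NOT NULL AUTO_INCREMENT,
  `userid`  INT(11)       NOT NULL,
  `title`   VARCHAR(50)   NULL,
  `hosting` VARCHAR(10)   NULL,
  `zipcode` VARCHAR(5)    NULL,
  `lat`     VARCHAR(20)   NULL,
  `long`    VARCHAR(20)   NULL,  -- LONG is a reserved word in MySQL, hence the backticks
  `msg`     VARCHAR(1000) NULL,
  `time`    DATETIME      NOT NULL,
  PRIMARY KEY (`id`)
  -- secondary indexes omitted here; see the index listing below
);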
That is the table. I simulated 500k rows of data and randomly deleted 270k rows, leaving only 230k rows with an auto-increment value of 500k.
Here are my indexes:
Keyname  Type   Unique  Packed  Field   Cardinality  Collation  Null
PRIMARY  BTREE  Yes     No      id      232377       A
info     BTREE  No      No      userid  2003         A
                                lat     25819        A          YES
                                long    25819        A          YES
                                title   25819        A          YES
                                time    25819        A
With that in mind, here is my query:
SELECT * FROM `posts`
WHERE `long` > -118.13902802886
  AND `long` < -118.08130797114
  AND `lat`  > 33.79987197114
  AND `lat`  < 33.85759202886
ORDER BY id ASC
LIMIT 0, 25
Showing rows 0 - 15 (16 total, Query took 1.5655 sec) [id: 32846 - 540342]
The query only returned one page of results, but because it had to scan all 230k records it still took 1.5 seconds.
Here is the query explained:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE posts index NULL PRIMARY 4 NULL 25 Using where
So even if I use WHERE clauses to get back only 16 results, I still get a slow query.
Now, for example, if I do a broader search:
SELECT * FROM `posts` WHERE `long`>-118.2544681443 AND `long`<-117.9658678557 AND `lat`>33.6844318557 AND `lat`<33.9730321443 ORDER BY id ASC LIMIT 0, 25
Showing rows 0 - 24 (25 total, Query took 0.0849 sec) [id: 691 - 29818]
It is much faster when retrieving the first page: 483 rows matched in total (20 pages), but I limit to 25 per page.
But if I ask for the last page:
SELECT * FROM `posts` WHERE `long`>-118.2544681443 AND `long`<-117.9658678557 AND `lat`>33.6844318557 AND `lat`<33.9730321443 ORDER BY id ASC LIMIT 475, 25
Showing rows 0 - 7 (8 total, Query took 1.5874 sec) [id: 553198 - 559593]
I get a slow query.
My question is: how do I achieve good pagination? When the website goes live I expect that, once it takes off, hundreds of posts will be created and deleted daily. Posts should be ordered by id or timestamp, and id is not sequential because some records will be deleted. I want to have standard pagination:
1 2 3 4 5 6 7 8 ... [Last Page]
Upvotes: 5
Views: 1679
Reputation: 1326
I figured it out. What was slowing me down was the ORDER BY: since I used LIMIT with an offset, the further down I asked to go, the more rows it had to sort. So I fixed it by adding a subquery that first extracts the data I want with a WHERE clause, and then applied ORDER BY and LIMIT to that result:
SELECT * FROM
    (SELECT * FROM `posts` AS `p`
     WHERE `p`.`long` > -119.2544681443
       AND `p`.`long` < -117.9658678557
       AND `p`.`lat`  > 32.6844318557
       AND `p`.`lat`  < 34.9730321443
    ) AS posttable
ORDER BY id DESC
LIMIT x, n
By doing that I achieved the following:
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY <derived2> ALL NULL NULL NULL NULL 3031 Using filesort
2 DERIVED p ALL NULL NULL NULL NULL 232377 Using where
Now I filter the 232k rows using the WHERE clause, and only ORDER BY and LIMIT the 3,031 matching results.
Showing rows 0 - 3030 (3,031 total, Query took 0.1431 sec)
Upvotes: 0
Reputation: 7141
If you are using AUTO_INCREMENT you may use:
SELECT *
FROM posts
WHERE id >= 200000
ORDER BY id DESC
LIMIT 200000, 30
This way MySQL will have to traverse only the rows with an id above 200000.
Upvotes: 0
Reputation: 33532
MySQL loses quite a bit of performance with a large offset. From the MySQL Performance Blog:
Beware of large LIMIT Using index to sort is efficient if you need first few rows, even if some extra filtering takes place so you need to scan more rows by index then requested by LIMIT. However if you’re dealing with LIMIT query with large offset efficiency will suffer. LIMIT 1000,10 is likely to be way slower than LIMIT 0,10. It is true most users will not go further than 10 page in results, however Search Engine Bots may very well do so. I’ve seen bots looking at 200+ page in my projects. Also for many web sites failing to take care of this provides very easy task to launch a DOS attack – request page with some large number from few connections and it is enough. If you do not do anything else make sure you block requests with too large page numbers.
For some cases, for example if results are static it may make sense to precompute results so you can query them for positions. So instead of query with LIMIT 1000,10 you will have WHERE position between 1000 and 1009 which has same efficiency for any position (as long as it is indexed)
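A minimal sketch of that precomputed-position idea, assuming a hypothetical `position` column (and an index on it) that the application recomputes whenever posts are added or deleted:

ALTER TABLE posts ADD COLUMN `position` INT NOT NULL;
CREATE INDEX idx_posts_position ON posts (`position`);

-- instead of ... ORDER BY id LIMIT 1000, 10
SELECT * FROM posts
WHERE `position` BETWEEN 1000 AND 1009
ORDER BY `position`;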
Upvotes: 0
Reputation: 1767
It looks like you only have a primary key index. You might want to define an index on the fields you use, such as:
create index idx_posts_id on posts (`id` ASC);
create index idx_posts_id_timestamp on posts (`id` ASC, `timestamp` ASC);
Having a regular index on your key field, besides your primary unique key index, usually speeds MySQL up by a lot.
Upvotes: 0
Reputation: 4967
A few remarks.
Given that you order by id, on each page you know the id of the first and the last record, so rather than LIMIT 200000 you should use WHERE id > $last_id LIMIT 20, and that will be blazingly fast (see the sketch after these remarks).
The drawback is obviously that you cannot offer a "last" page or any page in between if ids are not sequential (deleted in between). You may then use a combination of the last known id and an offset + LIMIT.
And obviously, having proper indexes will also help the sorting and limiting.
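A minimal sketch of that seek-style pagination, where $last_id is a hypothetical application variable holding the id of the last row shown on the previous page:

-- first page
SELECT * FROM posts ORDER BY id ASC LIMIT 20;

-- next page: replace $last_id with the id of the last row from the previous page
SELECT * FROM posts
WHERE id > $last_id
ORDER BY id ASC
LIMIT 20;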
Upvotes: 0
Reputation: 126
Unfortunately MySQL has to read (and, before that, sort) all the 20000 rows before it outputs your 30 results. If you can, try narrowing down your search by filtering on indexed columns within the WHERE clause.
Upvotes: 0
Reputation: 125955
Filter out of your results the records which appeared on earlier pages by using a WHERE clause: then you do not need to specify an offset, only a row count. For example, keep track of the last id or timestamp seen and filter for only those records with an id or timestamp greater than that.
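For example, a sketch of this approach applied to the bounding-box query from the question, with :last_seen_id as a hypothetical placeholder supplied by the application (a time-based version works the same way if you paginate by timestamp):

SELECT * FROM `posts`
WHERE `long` > -118.2544681443 AND `long` < -117.9658678557
  AND `lat` > 33.6844318557 AND `lat` < 33.9730321443
  AND `id` > :last_seen_id
ORDER BY id ASC
LIMIT 25;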
Upvotes: 2