Reputation: 117
SELECT COUNT(*) AS count_all, products.id AS products_id
FROM `products`
INNER JOIN `product_device_facilities`
ON `product_device_facilities`.`product_id` = `products`.`id`
INNER JOIN `product_vendors`
ON `product_vendors`.`ProductId` = `products`.`id`
INNER JOIN `screen_shots`
ON `screen_shots`.`ProductVendorId` = `product_vendors`.`id`
WHERE ( (DownloadCount >= 10 or DownloadCount is NULL)
and (Minsdk <= 10 or Minsdk is null))
GROUP BY products.id
HAVING GROUP_CONCAT(device_facility_id ORDER BY device_facility_id ASC ) IN (0)
This takes 10 seconds for 100k records.
How can I improve the performance?
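A first diagnostic step (not mentioned in the question, but standard practice) is to ask MySQL for its execution plan; rows with type = ALL indicate full table scans that an index could avoid:

-- Prefix the query with EXPLAIN to see which tables are scanned
-- without an index and which keys MySQL considers using:
EXPLAIN SELECT COUNT(*) AS count_all, products.id AS products_id
FROM `products`
INNER JOIN `product_device_facilities`
  ON `product_device_facilities`.`product_id` = `products`.`id`
INNER JOIN `product_vendors`
  ON `product_vendors`.`ProductId` = `products`.`id`
INNER JOIN `screen_shots`
  ON `screen_shots`.`ProductVendorId` = `product_vendors`.`id`
WHERE (DownloadCount >= 10 OR DownloadCount IS NULL)
  AND (Minsdk <= 10 OR Minsdk IS NULL)
GROUP BY products.id
HAVING GROUP_CONCAT(device_facility_id ORDER BY device_facility_id ASC) IN (0);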
Upvotes: 1
Views: 4466
Reputation: 189
There are a few things that you can try: make sure the join columns (product_device_facilities.product_id, product_vendors.ProductId, screen_shots.ProductVendorId) and the filter columns (DownloadCount, Minsdk) are indexed,
and if all else fails, denormalize the data so the query no longer has to join four tables.
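A minimal sketch of the indexing suggestion, using the column names from the query in the question (the index names are illustrative, and the table holding DownloadCount and Minsdk is an assumption since the schema is not shown):

-- Join columns:
CREATE INDEX idx_pdf_product_id ON product_device_facilities (product_id);
CREATE INDEX idx_pv_product_id ON product_vendors (ProductId);
CREATE INDEX idx_ss_product_vendor_id ON screen_shots (ProductVendorId);
-- Filter columns from the WHERE clause (adjust the table name if
-- DownloadCount and Minsdk live elsewhere):
CREATE INDEX idx_products_filters ON products (Minsdk, DownloadCount);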
Upvotes: 4
Reputation: 6943
Are Minsdk and DownloadCount in the same table? If so, adding a composite index on those two columns might help.
It could be that it is simply a hard, or even impossible, query to do quickly. Without seeing your full schema and data it is hard to be specific, but it is possible that splitting it up into several easier-to-execute queries would be faster. Or, as Amadeus suggested, you could denormalize the data a bit.
Another variation would be to just live with it taking 10 seconds, but make sure it's always done periodically in the background (with cron or similar) and never while a user is waiting. Then take the time to fix it if/when it takes minutes instead of seconds or otherwise puts an unacceptable burden on your user experience or servers.
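If both columns do live in the same table, the composite index suggested above might look like this (the index and table names are assumptions, since the schema is not shown):

CREATE INDEX idx_minsdk_download ON products (Minsdk, DownloadCount);
-- MySQL stores NULLs at the start of a B-tree index, so the
-- "column >= x OR column IS NULL" tests can still be answered
-- with index range scans rather than a full table scan.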
Upvotes: 1
Reputation: 5380
Add indexes on the join columns of every table involved, and instead of COUNT(*) use COUNT on an indexed primary-key column, e.g. COUNT(products.id).
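Applied to the query in the question, that suggestion changes only the select list (note that on InnoDB, COUNT(*) and COUNT of the primary key are typically executed the same way, so any gain depends on the storage engine):

SELECT COUNT(products.id) AS count_all, products.id AS products_id
-- rest of the query unchanged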
Upvotes: 1