Reputation: 2038
This query takes 11 seconds to complete. I've tried removing a few fields to make it faster, without luck. Is there a way to optimize this?
SELECT
    MAX(ig.group_name) AS item_code,
    MAX(ig.description) AS item_desc,
    (SUM(oi.qty * ip1.amount) / MAX(ig.amount)) AS qtySum,
    MAX(ig.unit) AS unit,
    MAX(ip1.cat) AS cat,
    MAX(i.ItemRoute) AS ItemRoute
FROM
    order_items oi
    INNER JOIN orders o ON o.localID = oi.local_order_id
    INNER JOIN items_pc ip1 ON (ip1.item = oi.item_code AND ip1.unit = oi.unit)
    INNER JOIN item_groups ig ON ig.id = ip1.ItemGroup
    INNER JOIN items i ON i.item = ip1.item
    INNER JOIN customers c ON o.customer_id = c.id
WHERE
    oi.qty > 0 AND
    o.status = 'submitted' AND
    i.ItemGroup IS NOT NULL AND
    o.delivery_date BETWEEN '7/27/2014' AND '7/30/2014'
GROUP BY ip1.ItemGroup
ORDER BY ItemRoute, cat, item_code
Upvotes: 0
Views: 93
Reputation: 47068
Impossible to say for sure without your real data, but the workflow for optimizing is roughly this: as a general rule, try to reduce the number of rows to be scanned as early as possible in your queries.
Reducing early is a bit hard to explain, so consider three tables A, B and C, each with 1M rows, and a WHERE clause that leaves 100k rows from A, 100k rows from B and 1k rows from C. If A is joined with B before that result is joined with C, the engine will perform something like 100k row lookups; but if C is joined with A first and that result is then joined with B, it will perform something like 1k row lookups.
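The effect of join order can be sketched with a toy nested-loop cost model (the numbers are the hypothetical ones from the paragraph above; a real optimizer's cost model is far more involved):

```python
def nested_loop_lookups(filtered_rows):
    """filtered_rows: rows surviving the WHERE filter on each table,
    listed in the order the tables are joined. Each join step probes
    the next table once per row produced so far; we assume each probe
    is an index lookup and that joins only shrink the row set (toy model)."""
    lookups = 0
    produced = filtered_rows[0]
    for rows in filtered_rows[1:]:
        lookups += produced             # one probe per row produced so far
        produced = min(produced, rows)  # toy assumption: join can only shrink
    return lookups

# A, B, C filtered to 100k, 100k and 1k rows respectively
print(nested_loop_lookups([100_000, 100_000, 1_000]))  # A⋈B first: 200,000 probes
print(nested_loop_lookups([1_000, 100_000, 100_000]))  # C first:     2,000 probes
```

Joining the most selective table first cuts the probe count by two orders of magnitude here, which is the whole point of reducing early.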
Likewise, when a table has columns A and B and your WHERE clause filters on both: if the filter on A matches 100k rows and the filter on B matches 1k rows, you want INDEX(B, A), not INDEX(A, B), so that the more selective column narrows the search space as quickly as possible.
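Applied to the query above, one candidate composite index on `orders` puts the equality column (`status`) before the range column (`delivery_date`), so the index narrows to one status value and then scans only the date range. A quick sketch using SQLite's `EXPLAIN QUERY PLAN` (the index name is illustrative, the table is a simplified stand-in, and ISO date strings are assumed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE orders (
    localID INTEGER PRIMARY KEY,
    customer_id INTEGER,
    status TEXT,
    delivery_date TEXT)""")

# Equality column first, range column second: the index satisfies
# status = 'submitted' exactly, then scans only the matching date range.
con.execute("CREATE INDEX idx_status_date ON orders (status, delivery_date)")

plan = con.execute("""EXPLAIN QUERY PLAN
    SELECT * FROM orders
    WHERE status = 'submitted'
      AND delivery_date BETWEEN '2014-07-27' AND '2014-07-30'""").fetchall()
print(plan[0][3])  # plan detail should mention idx_status_date
```

With the columns reversed, i.e. `(delivery_date, status)`, only the leading range column can bound the index scan and every status within the date range still has to be checked.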
Upvotes: 3