Reputation: 1
I have 5 tables in my DB: users, log1, log2, log3 and log4. Each log table saves data about the user (log1 is the login log, etc.), and each has UserID, CreateTime and LogID as its columns.
I want a query, fast enough for regular use, that shows how many rows each log table has for each user ID. Something like:
select u.UserID, count(log1.CreateTime) as l1, ..., count(log4.CreateTime) as l4
from users u
left join log1 on log1.UserID = u.UserID
left join ...
group by u.UserID
The problem is that this query takes too much time.
Can anyone help me optimize it?
Upvotes: 0
Views: 35
Reputation: 1271003
For this query:
select u.UserID, count(log1.CreateTime) as l1, ..., count(log4.CreateTime) as l4
from users u
left join log1 on log1.UserID = u.UserID
left join ...
group by u.UserID
You want indexes on the "log" tables: log1(UserID), etc.
However, I don't think this is what you are really looking for. With the same indexes, you should use correlated subqueries:
select u.*,
(select count(*) from log1 where log1.UserID = u.UserID) as log1_cnt,
. . .
from users u;
(This assumes that the creation date/time column is never NULL.)
This is better for several reasons: the chained left joins multiply rows across the log tables (a user with rows in more than one log produces a cross product, so the counts are inflated as well as slow to compute); each correlated subquery counts its table independently using its index; and there is no group by in the outer query.
Upvotes: 2
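A minimal runnable sketch of the difference, using Python's standard-library sqlite3 (table and column names follow the question; the data is illustrative). With two log tables, the chained left joins pair every log1 row with every log2 row, so both counts come back inflated, while the correlated subqueries count each table separately:

```python
# Demonstrates why chained LEFT JOINs inflate per-table counts,
# and why correlated subqueries do not. Uses sqlite3 (stdlib).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE users (UserID INTEGER PRIMARY KEY);
CREATE TABLE log1 (LogID INTEGER PRIMARY KEY, UserID INTEGER, CreateTime TEXT);
CREATE TABLE log2 (LogID INTEGER PRIMARY KEY, UserID INTEGER, CreateTime TEXT);
-- the indexes the answer recommends, one per log table
CREATE INDEX idx_log1_user ON log1(UserID);
CREATE INDEX idx_log2_user ON log2(UserID);
""")
cur.execute("INSERT INTO users VALUES (1)")
# user 1 has 2 rows in log1 and 3 rows in log2
cur.executemany("INSERT INTO log1(UserID, CreateTime) VALUES (?, ?)",
                [(1, "t1"), (1, "t2")])
cur.executemany("INSERT INTO log2(UserID, CreateTime) VALUES (?, ?)",
                [(1, "t1"), (1, "t2"), (1, "t3")])

# Chained LEFT JOINs: each log1 row pairs with each log2 row (2 x 3 = 6 rows),
# so both counts come back as 6 -- inflated.
join_row = cur.execute("""
    SELECT u.UserID, COUNT(l1.CreateTime), COUNT(l2.CreateTime)
    FROM users u
    LEFT JOIN log1 l1 ON l1.UserID = u.UserID
    LEFT JOIN log2 l2 ON l2.UserID = u.UserID
    GROUP BY u.UserID
""").fetchone()
print(join_row)  # (1, 6, 6) -- wrong

# Correlated subqueries: each count is computed independently via its index.
sub_row = cur.execute("""
    SELECT u.UserID,
           (SELECT COUNT(*) FROM log1 WHERE log1.UserID = u.UserID) AS l1,
           (SELECT COUNT(*) FROM log2 WHERE log2.UserID = u.UserID) AS l2
    FROM users u
""").fetchone()
print(sub_row)  # (1, 2, 3) -- correct
```

The inflation gets worse the more log tables you join, which is why the join version is both slow and incorrect for this purpose.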
Reputation: 822
I think you want to keep statistics about your log records.
We have log databases that receive about 1 million inserts per day. We created a statistics database and added an insert trigger to each log table, so every log insert updates or inserts the corresponding statistics record.
Upvotes: 0