JonB65

Reputation: 69

Improving the performance of a query

My background is Oracle, but we've moved to Hadoop on AWS and I'm accessing our logs using Hive SQL. I've been asked to produce a report, by uptime band, of systems where the number of high-severity errors of any given type exceeds 9 in any rolling 30-day period (the real threshold is 9, but I use 2 in the example to keep the data volume down). I've written code that does this, but I don't really understand performance tuning in Hive, and a lot of what I learned in Oracle doesn't seem applicable.

Can this be improved?

Data is roughly

CREATE TABLE LOG_TABLE
(SYSTEM_ID  VARCHAR(1),
 EVENT_TYPE VARCHAR(2),
 EVENT_ID   VARCHAR(3),
 EVENT_DATE DATE,
 UPTIME INT);

INSERT INTO LOG_TABLE
VALUES
('1','A1','138','2018-10-29',34),
('1','A2','146','2018-11-13',49),
('1','A3','140','2018-11-02',38),
('1','B1','130','2018-10-13',18),
('1','B1','150','2018-11-19',55),
('1','B2','137','2018-10-27',32),
('2','A1','128','2018-10-11',59),
('2','A1','131','2018-10-16',64),
('2','A1','136','2018-10-25',73),
('2','A2','139','2018-10-31',79),
('2','A2','145','2018-11-11',90),
('2','A2','147','2018-11-14',93),
('2','A3','135','2018-10-24',72),
('2','B1','124','2018-10-03',51),
('2','B1','133','2018-10-19',67),
('2','B2','134','2018-10-22',70),
('2','B2','142','2018-11-06',85),
('2','B2','148','2018-11-15',94),
('2','B2','149','2018-11-17',96),
('3','A2','127','2018-10-10',122),
('3','A3','123','2018-10-01',113),
('3','A3','125','2018-10-06',118),
('3','A3','126','2018-10-07',119),
('3','A3','141','2018-11-05',148),
('3','A3','144','2018-11-10',153),
('3','B1','132','2018-10-18',130),
('3','B1','143','2018-11-08',151),
('3','B2','129','2018-10-12',124);

and code that works is as follows. I do a self-join on the log table to pair up records of the same type on the same system, along with the gap between them, keeping pairs no more than 30 days apart. In a second CTE I select the anchor events that have more than 2 matches, and from these I count distinct event types and event IDs by system and uptime band.

WITH EVENTGAP AS  
(SELECT T1.EVENT_TYPE,
       T1.SYSTEM_ID,
       T1.EVENT_ID,
       T2.EVENT_ID AS EVENT_ID2,
       T1.EVENT_DATE,
       T2.EVENT_DATE AS EVENT_DATE2,
       T1.UPTIME,
       DATEDIFF(T2.EVENT_DATE,T1.EVENT_DATE) AS EVENT_GAP
FROM LOG_TABLE T1
  INNER JOIN LOG_TABLE T2
  ON (T1.EVENT_TYPE=T2.EVENT_TYPE
  AND T1.SYSTEM_ID=T2.SYSTEM_ID)
WHERE DATEDIFF(T2.EVENT_DATE,T1.EVENT_DATE) BETWEEN 0 AND 30
  AND T1.UPTIME BETWEEN 0 AND 299
  AND T2.UPTIME BETWEEN 0 AND 330),

 EVENTCOUNT
AS (SELECT EVENT_TYPE,
       SYSTEM_ID,
       EVENT_ID,
       EVENT_DATE,
       COUNT(1) AS EVENT_CNT
FROM EVENTGAP
GROUP BY EVENT_TYPE,
       SYSTEM_ID,
       EVENT_ID,
       EVENT_DATE
HAVING COUNT(1)>2)

SELECT EVENTGAP.SYSTEM_ID, 
       CASE WHEN FLOOR(UPTIME/50) = 0 THEN '0-49'
        WHEN FLOOR(UPTIME/50) = 1 THEN '50-99'
        WHEN FLOOR(UPTIME/50) = 2 THEN '100-149'
        WHEN FLOOR(UPTIME/50) = 3 THEN '150-199'
        WHEN FLOOR(UPTIME/50) = 4 THEN '200-249'
        WHEN FLOOR(UPTIME/50) = 5 THEN '250-299' END AS UPTIME_BAND,
       COUNT(DISTINCT EVENTGAP.EVENT_ID2) AS EVENT_COUNT, 
       COUNT(DISTINCT EVENTGAP.EVENT_TYPE) AS TYPE_COUNT 
FROM EVENTGAP
WHERE EVENTGAP.EVENT_ID IN (SELECT DISTINCT EVENTCOUNT.EVENT_ID FROM EVENTCOUNT)
GROUP BY EVENTGAP.SYSTEM_ID,
      CASE WHEN FLOOR(UPTIME/50) = 0 THEN '0-49'
        WHEN FLOOR(UPTIME/50) = 1 THEN '50-99'
        WHEN FLOOR(UPTIME/50) = 2 THEN '100-149'
        WHEN FLOOR(UPTIME/50) = 3 THEN '150-199'
        WHEN FLOOR(UPTIME/50) = 4 THEN '200-249'
        WHEN FLOOR(UPTIME/50) = 5 THEN '250-299' END

This gives the following result, which should be the unique counts of event IDs and event types that have 3 or more events falling in any rolling 30-day period. An event may fall in more than one period, but it is only counted once.


EVENTGAP.SYSTEM_ID  UPTIME_BAND EVENT_COUNT TYPE_COUNT
2   50-99   10  3
3   100-149 4   1
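
As a sanity check, here is a quick Python sketch (outside Hive, my own reconstruction of the anchor-window logic, not part of the report itself) that replicates those counts on the sample data:

```python
from datetime import date
from collections import defaultdict

# Sample data from the question: (system_id, event_type, event_id, event_date, uptime)
rows = [
    ('1', 'A1', '138', date(2018, 10, 29), 34),
    ('1', 'A2', '146', date(2018, 11, 13), 49),
    ('1', 'A3', '140', date(2018, 11, 2), 38),
    ('1', 'B1', '130', date(2018, 10, 13), 18),
    ('1', 'B1', '150', date(2018, 11, 19), 55),
    ('1', 'B2', '137', date(2018, 10, 27), 32),
    ('2', 'A1', '128', date(2018, 10, 11), 59),
    ('2', 'A1', '131', date(2018, 10, 16), 64),
    ('2', 'A1', '136', date(2018, 10, 25), 73),
    ('2', 'A2', '139', date(2018, 10, 31), 79),
    ('2', 'A2', '145', date(2018, 11, 11), 90),
    ('2', 'A2', '147', date(2018, 11, 14), 93),
    ('2', 'A3', '135', date(2018, 10, 24), 72),
    ('2', 'B1', '124', date(2018, 10, 3), 51),
    ('2', 'B1', '133', date(2018, 10, 19), 67),
    ('2', 'B2', '134', date(2018, 10, 22), 70),
    ('2', 'B2', '142', date(2018, 11, 6), 85),
    ('2', 'B2', '148', date(2018, 11, 15), 94),
    ('2', 'B2', '149', date(2018, 11, 17), 96),
    ('3', 'A2', '127', date(2018, 10, 10), 122),
    ('3', 'A3', '123', date(2018, 10, 1), 113),
    ('3', 'A3', '125', date(2018, 10, 6), 118),
    ('3', 'A3', '126', date(2018, 10, 7), 119),
    ('3', 'A3', '141', date(2018, 11, 5), 148),
    ('3', 'A3', '144', date(2018, 11, 10), 153),
    ('3', 'B1', '132', date(2018, 10, 18), 130),
    ('3', 'B1', '143', date(2018, 11, 8), 151),
    ('3', 'B2', '129', date(2018, 10, 12), 124),
]

groups = defaultdict(list)
for sys_id, etype, eid, edate, uptime in rows:
    groups[(sys_id, etype)].append((eid, edate, uptime))

# For each anchor event, collect the same-type events 0-30 days after it;
# anchors with more than 2 matches define a qualifying window (cf. the
# HAVING COUNT(1)>2 clause). Events are deduplicated via sets, so an event
# in overlapping windows is only counted once.
result = defaultdict(lambda: (set(), set()))  # (system, band) -> (ids, types)
for (sys_id, etype), events in groups.items():
    for _, d1, uptime in events:
        window = [eid for eid, d2, _ in events if 0 <= (d2 - d1).days <= 30]
        if len(window) > 2:
            band = f"{uptime // 50 * 50}-{uptime // 50 * 50 + 49}"
            ids, types = result[(sys_id, band)]
            ids.update(window)
            types.add(etype)

for (sys_id, band), (ids, types) in sorted(result.items()):
    print(sys_id, band, len(ids), len(types))
# 2 50-99 10 3
# 3 100-149 4 1
```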


Upvotes: 1

Views: 236

Answers (1)

Gordon Linoff

Reputation: 1269763

In both Hive and Oracle, you would want to do this using window functions with a window frame clause. The exact logic differs between the two databases.

In Hive you can use RANGE BETWEEN if you convert event_date to a number. One method is to subtract a fixed date from it; another is to use Unix timestamps:

select lt.*
from (select lt.*,
             count(*) over (partition by system_id, event_type
                            order by unix_timestamp(event_date)
                            range between 60*60*24*30 preceding and current row
                           ) as rolling_count
      from log_table lt
     ) lt
where rolling_count > 2  -- or 9
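
Outside Hive, the effect of that frame clause can be sketched in Python (a hand-rolled illustration, not Hive code): for each row, ordered within its partition, count the events in the trailing 30 days up to and including the current row.

```python
from datetime import date

def rolling_counts(dates, days=30):
    """For each event date (sorted within its partition), count the events
    in the trailing window of `days` days including the current row --
    a sketch of RANGE BETWEEN ... PRECEDING AND CURRENT ROW."""
    dates = sorted(dates)
    return [sum(1 for d in dates if 0 <= (cur - d).days <= days)
            for cur in dates]

# System 2, type B2 from the sample data:
b2 = [date(2018, 10, 22), date(2018, 11, 6),
      date(2018, 11, 15), date(2018, 11, 17)]
print(rolling_counts(b2))  # [1, 2, 3, 4]
```

A row is flagged once the window ending at that row reaches the threshold, which is how the rolling count identifies a cluster of events without a self-join.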

Upvotes: 3
