Reputation: 3869
I have a table DB_TBL
where historical data is inserted from a tool on the 5th of every month. As it is historical data, the number of rows inserted is very large; for example, I checked the count for this month and it is 1626521.
This table contains 50 columns, but I am only interested in 5 of them. There is a GUI on the front end which uses these 5 columns and where I provide a filter to fetch one month of data at a time, for example for the previous month as closure_date between 01.01.2020 and 31.01.2020.
Select Client_key, Portfolia_Name, Closure_date, Currency_Type, Balance
from DB_TBL
where CLOSURE_DATE between to_date('01.01.2020', 'dd.mm.yyyy')
                       and to_date('31.01.2020', 'dd.mm.yyyy');
This query is very slow and takes more than 30 minutes to run. The problem is critical because I face the same issue from the GUI. I have created an index on the Closure_date
column, but it still does not improve performance.
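For reference, the index on the date column is a plain single-column index, created roughly like this (the index name is just illustrative):
create index idx_closure_date on DB_TBL (CLOSURE_DATE);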
Is there any way I can improve the query performance? Maybe I can create a virtual-column-based index
or create a view?
Upvotes: 1
Views: 1478
Reputation: 1271131
First, I would write the query using date literals:
Select Client_key, Portfolia_Name, Closure_date, Currency_Type, Balance
from DB_TBL
where CLOSURE_DATE between date '2020-01-01' and date '2020-01-31'
For this query, you want an index on db_tbl(closure_date). If you already have this index, then presumably you have such a large volume of data that it takes a lot of time to return it. You can change the index into a covering index by including the rest of the selected columns in the index (after CLOSURE_DATE, which should be first).
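For example, a covering index for this particular query could look something like this (the index name is illustrative; the columns are the ones selected in the query above):
create index idx_db_tbl_closure_covering
    on DB_TBL (CLOSURE_DATE, CLIENT_KEY, PORTFOLIA_NAME, CURRENCY_TYPE, BALANCE);
With such an index, Oracle can answer the query from the index alone, without having to visit the table blocks at all.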
That said, if you want all data from January 2020, I would recommend:
where CLOSURE_DATE >= date '2020-01-01' and
CLOSURE_DATE < date '2020-02-01'
This will return all rows, even those that have a time component. This is particularly important in Oracle, because the date data type can have a time component -- even though user interfaces often show only the date.
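To illustrate why this matters, here is a small self-contained example (the stored value of 31-Jan-2020 14:35 is hypothetical): the BETWEEN version misses such a row, while the half-open range catches it.
with sample as (
  select to_date('31.01.2020 14:35:00', 'dd.mm.yyyy hh24:mi:ss') as closure_date
  from dual
)
select
  case when closure_date between date '2020-01-01' and date '2020-01-31'
       then 'matched' else 'missed' end as between_version,   -- returns 'missed'
  case when closure_date >= date '2020-01-01'
        and closure_date <  date '2020-02-01'
       then 'matched' else 'missed' end as half_open_version   -- returns 'matched'
from sample;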
Upvotes: 1