Saawan

Reputation: 383

Performance improvement for fetching records from a table of 10 million records in a Postgres DB

I have an analytics table containing 10 million records, and to produce charts I have to fetch records from it. Several other tables are joined to this table when the data is fetched. The query takes around 10 minutes even though I have indexed the joined columns and used materialized views in Postgres. Performance is still very poor: the SELECT query against the materialized view alone takes 5 minutes.

Please suggest a technique to get the result within 5 seconds. I don't want to change the DB storage structure, since too many code changes would be needed to support it. I would like to know if there are built-in methods for improving query speed.

Thanks in advance.

Upvotes: 0

Views: 291

Answers (1)

WeeniehuahuaXD

Reputation: 852

In general you can address this by giving the engine a better data structure to search (most engines already do this for you, to an extent, through keys and indexes).

But if you create a sorted column and a tree-like index over it, each lookup drops to O(log N) instead of the linear scans you may be facing right now. That alone usually gives a huge speedup in your searches.

This applies to binary search trees, red-black trees, B-trees, and so on.
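In Postgres specifically, you don't have to build this structure yourself: the default index type is already a balanced B-tree, so an ordinary index on the join/filter columns gives you the O(log N) lookups described above. A minimal sketch, assuming a hypothetical `analytics` table with `customer_id` and `chart_date` columns and using `psycopg2`:

```python
# Minimal sketch, not the asker's schema: table and column names
# (analytics, customer_id, chart_date) are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder DSN
conn.autocommit = True  # CREATE INDEX CONCURRENTLY cannot run inside a transaction

with conn.cursor() as cur:
    # Postgres's default index type is a balanced B-tree, i.e. the
    # tree-like structure described above; CONCURRENTLY avoids locking
    # out writes while the 10 million rows are being indexed.
    cur.execute(
        "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_analytics_chart "
        "ON analytics (customer_id, chart_date)"
    )

conn.close()
```

Note that a composite index only helps when the query filters or joins on its leading column, so match the column order to the actual WHERE/JOIN clauses.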

Another option for a speedup is to add something along the lines of Redis, i.e. a caching layer in front of the database.
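A minimal read-through cache sketch, assuming the `redis-py` and `psycopg2` libraries; the key scheme, TTL, and connection details are illustrative assumptions, not a prescription:

```python
# Minimal sketch of a Redis read-through cache in front of the slow query.
import json
import psycopg2
import redis

r = redis.Redis(host="localhost", port=6379, db=0)  # placeholder connection
CACHE_TTL = 300  # seconds; charts usually tolerate slightly stale data

def fetch_chart_data(sql, params=()):
    key = "chartcache:%s|%s" % (sql, params)  # naive cache key, for illustration
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the slow query entirely

    conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder DSN
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            rows = cur.fetchall()
    finally:
        conn.close()

    # Cache miss: store the result; default=str handles dates and decimals.
    r.setex(key, CACHE_TTL, json.dumps(rows, default=str))
    return rows
```

Because charts rarely need up-to-the-second data, even a short TTL turns the repeated multi-minute query into a sub-second cache hit for everyone after the first requester.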

For analytical workloads I have also used technologies from the Hadoop ecosystem in the past, though that would be a larger migration in your case at this point.

Upvotes: 1
