Reputation: 619
One of our databases is 50 GB. One of its tables has 149,444,622 records; that table is 14 GB and its indexes are 16 GB, for a total of 30 GB. I have performed the steps below on that table.
REINDEX TABLE table_name;
VACUUM FULL VERBOSE ANALYZE table_name;
But the size of the table and its indexes has not been reduced. Please guide me on how to proceed further.
The structure of the table is below.
Upvotes: 0
Views: 1813
Reputation: 164819
14 GB for your data is not abnormal. Let's do the math.
Simply adding up the sizes of your columns gives 68 bytes per row.
2 bigints  @ 8 bytes each = 16 bytes
4 integers @ 4 bytes each = 16 bytes
4 doubles  @ 8 bytes each = 32 bytes
1 date     @ 4 bytes      =  4 bytes
                            --------
                            68 bytes
149,444,622 rows at 68 bytes each is about 10.2 GB. This is the absolute minimum size of your data if there were no database overhead. But there is overhead. This answer reckons it's about 28 bytes per row. 68 + 28 is 96 bytes per row. That brings us to... 14.3 GB. Just about what you have.
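The arithmetic above can be checked directly in psql. The row count and column sizes come from the question; the 28-byte figure is the assumed per-row overhead:

```sql
-- Sanity-check the estimate: payload bytes per row, then totals in GB.
SELECT 2*8 + 4*4 + 4*8 + 4                  AS payload_bytes,  -- 68
       149444622::bigint * 68 / 1e9         AS data_gb,        -- ~10.2
       149444622::bigint * (68 + 28) / 1e9  AS total_gb;       -- ~14.3
```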
I doubt you can reduce the size without changing your schema, dropping indexes, or deleting data. If you provided more detail about your schema we could offer suggestions, but that would be best asked as a new question.
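Since the indexes (16 GB) are bigger than the table itself, it's worth checking whether any of them are large but rarely used. A sketch using the standard pg_stat_user_indexes view and the pg_relation_size()/pg_size_pretty() functions (substitute your real table name for table_name):

```sql
-- Per-index size and scan count for one table; large indexes that
-- are almost never scanned are the first candidates for dropping.
SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       idx_scan                                     AS scans
FROM   pg_stat_user_indexes
WHERE  relname = 'table_name'
ORDER  BY pg_relation_size(indexrelid) DESC;
```

Note that idx_scan counts are cumulative since the last statistics reset, so a zero may just mean the stats were reset recently.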
Finally, consider that 50 GB is a pretty small database. For example, the smallest paid database offered by Heroku is 64 GB and just $50/month. This may be a situation where it's fine to just use a bigger disk.
Upvotes: 5