Reputation: 163
I have a small InfluxDB database running inside my K3S cluster.
As Storage Class I use Longhorn.
I know it's not optimal to run a database in Kubernetes, but this is only for some metric logging for Telegraf.
The problem is that inside the pod the mounted volume only uses about 200 MB, but Longhorn reports an actual size of 2.5 GB. The volume is only one day old, so at this rate my disk will be full soon.
Why is this? And is this something I can fix?
Upvotes: 0
Views: 1601
Reputation: 477
I suspect the reason for this is snapshots.
Longhorn volumes have different size "properties":

- The filesystem usage you see when you run df -h inside an attached pod, or with a tool like df-pv. This is the number that matters when the volume is getting full.
- The actual size reported by Longhorn, i.e. the space the volume's data occupies on the node's disk, which also includes snapshot history.

Longhorn keeps a history of previous changes to a volume as snapshots. You can either create them manually from the UI or create a RecurringJob that does that for you automatically (see the sketch further down).
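For a quick comparison of the two numbers from the command line, something like the following should work (the pod name, mount path and the actualSize field are assumptions based on my reading of the Longhorn CRDs and may differ in your setup or Longhorn version):

# Filesystem usage as seen from inside the pod (the ~200 MB figure)
kubectl exec -it influxdb-0 -- df -h /var/lib/influxdb

# Space the volume really occupies on the node, including snapshot history
# (the value the Longhorn UI shows as "Actual Size")
kubectl -n longhorn-system get volumes.longhorn.io \
  -o custom-columns=NAME:.metadata.name,SIZE:.spec.size,ACTUAL:.status.actualSize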
Having many snapshots is problematic when a lot of data is (re-)written to a volume. Imagine the following scenario: the application writes a few hundred MB, a snapshot is taken, and the application then rewrites or deletes that data. The old blocks are still referenced by the snapshot, so Longhorn cannot free them, and every further snapshot pins yet another generation of blocks that have since been rewritten. The filesystem inside the pod stays small while the actual size keeps growing, which is exactly the pattern you would expect from a write-heavy workload like metric ingestion.
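If the snapshots come from a RecurringJob (or you want to set one up), keeping the retain count low stops old data from piling up. A minimal sketch, assuming a recent Longhorn release where the v1beta2 RecurringJob API is available; the job name, schedule and group are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: snapshot-daily
  namespace: longhorn-system
spec:
  task: snapshot        # take a snapshot of the volume
  cron: "0 3 * * *"     # once a day at 03:00
  retain: 2             # keep only the two most recent snapshots
  concurrency: 1
  groups:
    - default           # applies to volumes in the "default" job group
EOF

Deleting snapshots you no longer need from the Longhorn UI should also bring the actual size back down once the purge has finished.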
There's also an ongoing discussion in the Longhorn project about reclaiming space automatically.
Upvotes: 2