Reputation: 761
I have a service on my Kubernetes cluster that writes large assets to my machine's hard disk. Some of that data could also be served statically by a different service in my system. The save location is mapped to an actual folder on my disk.
I already found that I can see some information about my "ephemeral" storage capacity and allocatable capacity through kubectl describe node,
but the data doesn't align with what I see when I run df -h
in my machine's terminal. On the node, I can see that I could allocate 147GB, while my terminal shows only 98GB available (so Kubernetes probably reserves some space for our deployments). I would like my metrics to reflect the actual state of the hard drive.
My Question:
How do I check, through the Kubernetes Python client, the status of the storage on my machine without mounting my root path into the relevant container? Is there an API to the metrics service that shows me the status of my machine's actual storage? I tried looking through the Python API and couldn't find it. What am I missing?
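For reference, kubectl describe node reports allocatable ephemeral-storage as a Kubernetes resource-quantity string (e.g. "147Gi" or a raw byte count), which is not directly comparable with df -h output. Below is a minimal, hypothetical helper for converting such a string to bytes; newer versions of the official Python client also ship a similar parse_quantity utility under kubernetes.utils.

```python
# Convert a Kubernetes resource-quantity string (as shown under
# "Allocatable" in `kubectl describe node`) into a byte count.
# Hypothetical helper covering the common binary/decimal suffixes;
# it does not handle exponent or milli ("m") forms.
_SUFFIXES = {
    "Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4,
    "k": 1000, "M": 1000**2, "G": 1000**3, "T": 1000**4,
}

def quantity_to_bytes(quantity: str) -> int:
    for suffix, factor in _SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(float(quantity[: -len(suffix)]) * factor)
    return int(quantity)  # plain byte count, e.g. "98316859143"

print(quantity_to_bytes("147Gi"))  # 157840048128
```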
Upvotes: 0
Views: 2385
Reputation: 54247
Kubernetes does not track overall storage available. It only knows things about emptyDir volumes and the filesystem backing those. If you're using a hostPath mount (which it sounds like you are), that is outside of Kube's view of the world. You can use something like node_exporter to gather those statistics yourself.
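If you only need the numbers from inside your own service (rather than exposing them to a monitoring stack via node_exporter), Python's standard library can read the real filesystem state directly, which matches what df -h reports rather than the node's allocatable figure. A minimal sketch; the path argument is whatever directory backs your hostPath mount:

```python
import shutil

def disk_status(path: str = "/") -> tuple[int, int, int]:
    """Return (total, used, free) in bytes for the filesystem backing `path`.

    This queries the real filesystem, so it agrees with `df -h`,
    not with the allocatable value from `kubectl describe node`.
    """
    usage = shutil.disk_usage(path)
    return usage.total, usage.used, usage.free

total, used, free = disk_status("/")  # e.g. point this at the hostPath dir
print(f"free: {free / 1024**3:.1f} GiB of {total / 1024**3:.1f} GiB")
```

Inside a container this reports the container's view of the mounted filesystem, so run it against the mount that maps to the host folder you care about.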
Upvotes: 1