Reputation: 6039
Our deployment's imagePullPolicy
wasn't set for a while, which means it defaulted to IfNotPresent.
If I understand correctly, each k8s node stores pulled images locally so they can be reused on the next deployment if necessary.
Is it possible to list/show all the locally stored images per node in an AKS cluster?
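For reference, imagePullPolicy is set per container in the Deployment's pod template; a minimal sketch (the names and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # hypothetical name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myregistry.azurecr.io/my-app:1.0   # hypothetical image
        imagePullPolicy: IfNotPresent             # the default when the tag is not :latest
```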
Upvotes: 4
Views: 16142
Reputation: 104
Per the docs, one approach is to list the images of all running containers (though this does not include pulled images that are not currently in use):
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[:space:]' '\n' |\
sort |\
uniq -c
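To scope the same pipeline to a single node, a `--field-selector` on `spec.nodeName` can be added (the node name below is hypothetical); the dedup stage works on any whitespace-separated image list:

```shell
# Per-node variant of the command above (node name is hypothetical):
#   kubectl get pods --all-namespaces --field-selector spec.nodeName=aks-nodepool1-0 \
#     -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[:space:]' '\n' | sort | uniq -c

# The tr/sort/uniq stage itself, shown on a hypothetical image list:
printf 'nginx:1.25 nginx:1.25 busybox:1.36' | tr -s '[:space:]' '\n' | sort | uniq -c
```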
Upvotes: -1
Reputation: 1721
The Docker CLI is unavailable on AKS worker nodes, so to list container images use crictl instead. It is located in /usr/local/bin:
ls /usr/local/bin
bpftrace crictl health-monitor.sh kubectl kubelet
Run crictl images to list the container images on an AKS worker node.
Debugging on k8s with crictl
Other cri-tools info
Getting a shell into a node with the krew plugin node-shell
Other Krew Plugins
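Putting those pieces together, a hedged sketch: open a host shell on the node (kubectl debug is one alternative to the node-shell plugin; the node name and debug image below are hypothetical), then run crictl images. The awk line shows one way to keep just the repository column of crictl's tabular output (the sample rows are made up):

```shell
# Get a host shell on one node (hypothetical node name and debug image):
#   kubectl debug node/aks-nodepool1-12345678-vmss000000 -it \
#     --image=mcr.microsoft.com/cbl-mariner/busybox:2.0
#   chroot /host
#   crictl images

# crictl images prints a table; hypothetical sample output:
sample='IMAGE                                   TAG   IMAGE ID       SIZE
mcr.microsoft.com/oss/kubernetes/pause  3.6   6270bb605e12   302kB'
# Keep only the repository column, skipping the header row:
echo "$sample" | awk 'NR > 1 {print $1}'
```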
Upvotes: 5
Reputation: 2160
Yes. First, check which node the pod for that microservice has been scheduled on:
kubectl -n namespace get pods -o wide
Once you have the node, set up an SSH connection to it; this link can be used to do that.
Then you can execute the following command on that VM:
docker images
It will give you all the Docker images on that node.
Upvotes: -2
Reputation: 68
On clusters whose nodes use the Docker runtime, you can list/show the local images per node by logging in to the worker node and running:
docker images
This gives you the list of all the images on that particular node.
Upvotes: 2