Reputation: 130
I'm deploying several services to my local cluster (minikube) using the DevSpace tool. Once someone makes changes to one of the services and pushes the image to our private repo, I need those changes to become available on my local cluster as well. What I do now is completely delete the minikube cluster and start a new one; that way all images with the same tags are updated to the latest version instead of being served from the cache.
But I believe there is a more elegant way to handle this: I need to clean up/remove/delete outdated images from my local cluster somehow before re-deploying services there.
Can someone point out where they are stored and how I can review and remove them? Thanks.
Upvotes: 4
Views: 31507
Reputation: 567
Here is an alternative approach that I use in a build script.
The main advantage is that the command is synchronous, so you can include it in something like a build script without requiring your cluster to download the image every time, just to allow occasional updates during rebuilds.
# an example of building a new image and uploading it for reload in k8s
# in the question, this is already done
docker build -t jamesandariese/my-cool-image:latest .
docker push jamesandariese/my-cool-image:latest
# reload image in k8s, ignoring cached image
kubectl run \
  --image=jamesandariese/my-cool-image:latest \
  --image-pull-policy=Always \
  --restart=Never \
  --rm=true \
  -i download-image --command -- true

if [ $? -eq 0 ]; then
  1>&2 echo "all pods launched with this image tag will now use the updated image"
else
  1>&2 echo "FAILED TO REFRESH IMAGE. See error from kubectl"
fi
This method works based on a few facts:
- minikube runs as a single k8s node (this also works on other single-node clusters like k3s)
- running a new pod with the image-to-update and an image pull policy of Always causes the new image to be downloaded
- by overriding the command to true and attaching (via -i), we get an exit code that reflects whether the image was successfully updated
Once kubectl exits successfully, the image is updated in minikube.
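To verify that the node's cache was actually refreshed, you can inspect the images on the minikube node (a sketch, assuming the docker container runtime and the example image above):
minikube ssh -- docker images --digests | grep my-cool-image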
NOTE: This can be done across whole clusters with a DaemonSet and kubectl wait, but by the time you've got a full cluster, your SDLC should focus less on how to write a script to do this and more on how to tag things properly so they aren't always :latest or :prod. This shift is important to allow predictable rollbacks and multiple versions running at the same time in a cluster.
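A rough sketch of that DaemonSet variant (all names here are hypothetical): an init container pulls the fresh image on every node, a pause container keeps the pod alive, and kubectl rollout status (or kubectl wait on the pods) blocks until every node has pulled it.
# refresh-image-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: refresh-my-cool-image
spec:
  selector:
    matchLabels:
      app: refresh-my-cool-image
  template:
    metadata:
      labels:
        app: refresh-my-cool-image
    spec:
      initContainers:
      - name: pull
        image: jamesandariese/my-cool-image:latest
        imagePullPolicy: Always     # forces the node to re-download the tag
        command: ["true"]           # exit immediately once the pull succeeded
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9   # keeps the pod alive for the rollout check
Then apply it, wait for every node, and clean up:
kubectl apply -f refresh-image-ds.yaml
kubectl rollout status daemonset/refresh-my-cool-image --timeout=5m
kubectl delete daemonset refresh-my-cool-image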
Upvotes: 2
Reputation: 969
DevSpace maintainer here. What you need is 2 things:
1. A way to force your pods to be recreated whenever you redeploy, e.g. via devspace dev. So, if you are using a Deployment or StatefulSet, you can add something like a label, e.g. containing the DevSpace built-in timestamp variable as value, to your pod template.
2. imagePullPolicy: Always in your pod spec to ensure that Kubernetes always pulls the newest image for each newly created pod. Otherwise Kubernetes would use the already cached image.
In combination, this could look like this within your devspace.yaml file (if you are using the component-chart deployment):
deployments:
- name: my-component
  helm:
    componentChart: true
    values:
      labels:
        timestamp: $!{DEVSPACE_TIMESTAMP} # here is 1.
      containers:
      - image: "YOUR_IMAGE:latest" # specify any tag here that you want
        imagePullPolicy: Always # here is 2.
Regarding $!{DEVSPACE_TIMESTAMP}: the $!{} syntax forces DevSpace to place the value of this variable as a string (because k8s only allows string values for labels), and DEVSPACE_TIMESTAMP is the name of a predefined variable in DevSpace. More details here: https://devspace.sh/cli/docs/configuration/variables/basics#predefined-variables
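For reference, outside of DevSpace the same two ingredients look roughly like this in a plain Kubernetes Deployment (a sketch; names are hypothetical, and the timestamp label value would be stamped in by whatever tooling renders the manifest):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
        timestamp: "1650000000"    # 1. changing this value forces pod recreation
    spec:
      containers:
      - name: my-service
        image: YOUR_IMAGE:latest
        imagePullPolicy: Always    # 2. always pull the newest image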
Upvotes: 8
Reputation: 763
You can try the commands below.
Removing untagged images:
docker image rm $(docker images | grep "^<none>" | awk '{print $3}')
Removing all stopped containers:
docker container rm $(docker ps -a -q)
(OR)
You need to stop and disable the localkube service:
systemctl disable localkube.service
systemctl stop localkube.service
After that, you're able to stop and remove containers with
docker system prune -a
which removes all stopped containers and all unused images.
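One caveat: for these docker commands to affect the images cached inside minikube, they have to run against the minikube node's Docker daemon rather than your host's, e.g.:
eval $(minikube docker-env)   # point the local docker CLI at minikube's daemon
docker images                 # review the images cached inside the cluster node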
Upvotes: 0
Reputation:
Instead of deleting all images and recreating the cluster, you can perform a rolling update (this assumes you are using Deployments, as you should):
kubectl set image deployment/<deployment-name> <container-name>=<repository-name>/<image-name>:<image-tag>
This also assumes you are using proper versioning with tags.
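For example, with hypothetical deployment, container, and image names:
kubectl set image deployment/my-app my-app=myrepo/my-app:1.2.3   # container "my-app" gets the new tag
kubectl rollout status deployment/my-app                         # block until the rollout completes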
Alternatively, if you are using images with the latest tag, you can change imagePullPolicy to Always, then delete the necessary pods with
kubectl delete pod <pod-name> <pod2-name> ...
The newer image will be pulled as the new pods are created.
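A sketch of that flow with hypothetical names (note that patching the pod template already triggers a rollout by itself, so the explicit pod deletion mainly matters if the policy was already Always):
kubectl patch deployment my-app --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"Always"}]'
kubectl delete pod -l app=my-app   # assumes the pods are labeled app=my-app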
If you still want to delete unused docker images, you can do so with
docker image prune -a
This will remove all images without at least one container associated with them.
Upvotes: 1