Reputation: 7706
I have a Deployment with a single pod, using my custom Docker image, like:
containers:
  - name: mycontainer
    image: myimage:latest
During development I want to push a new latest version and have the Deployment updated. I can't find a way to do that without explicitly defining a tag/version, incrementing it for each build, and running
kubectl set image deployment/my-deployment mycontainer=myimage:1.9.1
Upvotes: 269
Views: 398814
Reputation: 173
I was looking for a single command that just updates the image. I have only one container, and passing the container name is a hassle since it changes from one deployment to another. This generic command just updates the image. I usually use it for testing, and I push my image with a tag that has a number at the end.
Syntax:
kubectl set image deploy/<deployment_name> '*=<imageUrl>:<tag>'
Explanation:
*=: Means all the containers in the deployment.
'*=...': It is wrapped in quotes to prevent zsh from evaluating the '*' character.
Example:
kubectl set image deploy/my-deployment '*=ghcr.io/myrepo/myproj:fixes36'
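To watch the resulting rollout finish (a quick check I'm adding, reusing the deployment name from the example above):
kubectl rollout status deploy/my-deployment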
Upvotes: -1
Reputation: 1685
We could update it using the following command:
kubectl set image deployment/<<deployment-name>> -n=<<namespace>> <<container_name>>=<<your_dockerhub_username>>/<<image_name you want to set now>>:<<tag_of_the_image_you_want>>
For example,
kubectl set image deployment/my-deployment -n=sample-namespace my-container=alex/my-sample-image-from-dockerhub:1.1
where:
kubectl set image deployment/my-deployment
- the deployment whose image we would like to change
-n=sample-namespace
- the namespace this deployment belongs to; if the deployment belongs to the default namespace, there is no need to mention this part in the command
my-container
- the container name that was previously specified in the YAML file of the original deployment configuration
alex/my-sample-image-from-dockerhub:1.1
- the new image which you want to set for the deployment and run the container from; here alex is the Docker Hub username (if applicable) and my-sample-image-from-dockerhub:1.1 is the image and tag you want to use
Upvotes: 15
Reputation: 22228
Another option, more suitable for debugging but worth mentioning, is to check the revision history of your rollout:
$ kubectl rollout history deployment my-dep
deployment.apps/my-dep
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
To see the details of each revision, run:
kubectl rollout history deployment my-dep --revision=2
And then return to the previous revision by running:
$ kubectl rollout undo deployment my-dep --to-revision=2
And then go back to the newer one again.
Like running Ctrl+Z -> Ctrl+Y
(:
(*) The CHANGE-CAUSE is <none> because the updates were not run with the --record flag - as mentioned here:
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
(**) There is a discussion regarding deprecating this flag.
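Since that flag may go away, an alternative (a sketch on my part, not from the original answer) is to set the change-cause annotation yourself after each update, which is what populates the CHANGE-CAUSE column:
kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to nginx:1.16.1"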
Upvotes: 8
Reputation: 7713
UPDATE 2019-06-24
Based on @Jodiug's comment, if you are on version 1.15 or newer you can use the command:
kubectl rollout restart deployment/demo
Read more on the issue:
https://github.com/kubernetes/kubernetes/issues/13488
Well there is an interesting discussion about this subject on the kubernetes GitHub project. See the issue: https://github.com/kubernetes/kubernetes/issues/33664
From the solutions described there, I would suggest one of two.
1. Prepare the deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: registry.example.com/apps/demo:master
        imagePullPolicy: Always
        env:
        - name: FOR_GODS_SAKE_PLEASE_REDEPLOY
          value: 'THIS_STRING_IS_REPLACED_DURING_BUILD'
2. Deploy:
sed -ie "s/THIS_STRING_IS_REPLACED_DURING_BUILD/$(date)/g" deployment.yml
kubectl apply -f deployment.yml
The second option is to patch the deployment with a label set to the current timestamp, which also forces a new rollout:
kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
Of course, imagePullPolicy: Always is required in both cases.
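As a small variation on the first option (my own hedged sketch, not part of the original answer), the dummy environment variable can also be updated directly with kubectl set env, which likewise triggers a rollout:
kubectl set env deployment/demo FOR_GODS_SAKE_PLEASE_REDEPLOY="$(date)"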
Upvotes: 192
Reputation: 46
I am using Azure DevOps to deploy containerized applications, and I easily manage to overcome this problem by using the build ID.
Every time it builds, it generates a new build ID, and I use this build ID as the tag for the Docker image. Here is an example:
imagename:buildID
Once your image is built (CI) successfully, in the CD pipeline's deployment YAML file I give the image name as
imagename:env:buildID
where env:buildID is the Azure DevOps variable that holds the build ID.
So now every time I have new changes, they build (CI) and deploy (CD).
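A minimal sketch of what such a step can look like as a shell script in the pipeline (the registry and names below are my assumptions; BUILD_BUILDID is the environment-variable form of the Azure DevOps build ID):
# build and push the image tagged with the build ID (assumed registry/name)
docker build -t myregistry.example.com/imagename:$BUILD_BUILDID .
docker push myregistry.example.com/imagename:$BUILD_BUILDID
# point the deployment at the freshly pushed tag (assumed deployment/container names)
kubectl set image deployment/my-deployment mycontainer=myregistry.example.com/imagename:$BUILD_BUILDID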
Please comment if you need the build definition for CI/CD.
Upvotes: 1
Reputation: 3941
kubectl rollout restart deployment myapp
This is the current way to trigger a rolling update while leaving the old replica sets in place for other operations provided by kubectl rollout, such as rollbacks.
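To see the old replica sets that are kept around after the restart (just a quick check, not part of the original answer):
kubectl get replicasets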
Upvotes: 56
Reputation: 476
I use GitLab CI to build the image and then deploy it directly to GKE. I use a neat little trick to achieve a rolling update without changing any real settings of the container: changing a label to the current commit short SHA.
My command looks like this:
kubectl patch deployment my-deployment -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"build\":\"$CI_COMMIT_SHORT_SHA\"}}}}}"
Where you can use any name and any value for the label as long as it changes with each build.
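To confirm that the label (and therefore the pod template) actually changed, a quick check you can add (assuming the build label name from the command above):
kubectl get deployment my-deployment -o jsonpath='{.spec.template.metadata.labels.build}'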
Have fun!
Upvotes: 22
Reputation: 11418
It seems that k8s expects us to provide a different image tag for every deployment. My default strategy would be to make the CI system generate and push the Docker images, tagging them with the build number: xpmatteo/foobar:456.
For local development it can be convenient to use a script or a makefile, like this:
# create a unique tag
VERSION:=$(shell date +%Y%m%d%H%M%S)
TAG=xpmatteo/foobar:$(VERSION)

deploy:
	npm run-script build
	docker build -t $(TAG) .
	docker push $(TAG)
	sed s%IMAGE_TAG_PLACEHOLDER%$(TAG)% foobar-deployment.yaml | kubectl apply -f - --record
The sed
command replaces a placeholder in the deployment document with the actual generated image tag.
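For this to work, foobar-deployment.yaml is assumed to contain the literal placeholder in its image field, roughly like:
image: IMAGE_TAG_PLACEHOLDER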
Upvotes: 9
Reputation: 8436
You can configure your pod with a grace period (for example 30 seconds or more, depending on container startup time and image size), set imagePullPolicy: "Always", and then use kubectl delete pod pod_name.
A new container will be created and the latest image automatically downloaded, then the old container terminated.
Example:
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: my-container
    image: my_image:latest
    imagePullPolicy: "Always"
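Since the actual pod name is generated by the Deployment, it can be easier to delete by label instead of by name (a sketch on my part, assuming the pods carry a label such as app=my-app):
kubectl delete pod -l app=my-app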
I'm currently using Jenkins for automated builds and image tagging and it looks something like this:
kubectl --user="kube-user" --server="https://kubemaster.example.com" --token=$ACCESS_TOKEN set image deployment/my-deployment mycontainer=myimage:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
Another trick is to initially run:
kubectl set image deployment/my-deployment mycontainer=myimage:latest
and then:
kubectl set image deployment/my-deployment mycontainer=myimage
It will actually trigger the rolling update, but be sure you also have imagePullPolicy: "Always" set.
Update:
another trick I found, where you don't have to change the image name, is to change the value of a field that will trigger a rolling update, like terminationGracePeriodSeconds
. You can do this using kubectl edit deployment your_deployment
or kubectl apply -f your_deployment.yaml
or using a patch like this:
kubectl patch deployment your_deployment -p \
'{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":31}}}}'
Just make sure you always change the number value.
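To pick a value that actually differs from the current one (just a helper check, not from the original answer), you can read the field first:
kubectl get deployment your_deployment -o jsonpath='{.spec.template.spec.terminationGracePeriodSeconds}'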
Upvotes: 279