Reputation: 12134
I have a pod test-1495806908-xn5jn with 2 containers. I'd like to restart one of them, called container-test. Is it possible to restart a single container within a pod, and how? If not, how do I restart the pod?
The pod was created using a deployment.yaml with:
kubectl create -f deployment.yaml
Upvotes: 218
Views: 456542
Reputation: 9133
Now, there is a kubectl rollout restart
sub-command added in Kubernetes 1.15.
Examples:
# Restart all deployments in test-namespace namespace
kubectl rollout restart deployment -n test-namespace
# Restart a deployment
kubectl rollout restart deployment/nginx
# Restart a daemon set
kubectl rollout restart daemonset/abc
# Restart deployments with the app=nginx label
kubectl rollout restart deployment --selector=app=nginx
See also, https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_restart/.
Both pods and containers are ephemeral. Try the following command to stop the specific container, and the k8s cluster will start a new container in its place.
kubectl exec -it [POD_NAME] -c [CONTAINER_NAME] -- /bin/sh -c "kill 1"
This will send a SIGTERM
signal to process 1, which is the main process running in the container. All other processes will be children of process 1, and will be terminated after process 1 exits. See the kill manpage for other signals you can send.
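For example, if the process running as PID 1 treats SIGHUP as a "reload" rather than a shutdown (that depends entirely on your application, so treat this as a sketch), you could send that signal instead:
kubectl exec -it [POD_NAME] -c [CONTAINER_NAME] -- /bin/sh -c "kill -HUP 1"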
Upvotes: 102
Reputation: 2453
Restarting a single container is very much possible, but the ways might differ depending on the cluster configuration. In general you need these two conditions:
1) restartPolicy: Always (default for most workloads, so that's easy)
2) some way to kill the container's main process (which is what the options below are about)
Remember that a container is just a little fence (namespace) around a process (and network, filesystem, etc.) on someone's (usually pretty generic) Linux server (that's called a node in the Kubernetes jargon). This means:
- If you are root at the node where the container runs, you can wreak havoc I surely don't need to describe, right? But most of the time you are not. And that's a good thing.
- If you can kubectl exec to it:
kubectl exec -it $POD -c $CONTAINER -- sh -c 'kill 1'
This will send a TERM signal to process 1, after which most well-behaved processes die more or less gracefully. Process 1 is usually the container's main process (or sometimes not), after whose death the container gets torn down and kubelet takes over restarting it again. If the process is stubborn, you might try kill -9 1, which should kill it for good, but again YMMV.
- If you can use an ephemeral container:
kubectl debug -it --image=busybox $POD --target=$CONTAINER -- sh -c 'kill 1'
This will create a new "ephemeral container" in the pod that will land pretty much in the same fence as the --target one's. This allows you to see the target process and kill it, doing so with tools available in the supplied --image. I used busybox as that's tiny and gets the job done, but you sure can use any fatter image like ubuntu.
Upvotes: 6
Reputation: 679
kubectl delete pods POD_NAME
This command will delete the pod, and a new one will be created automatically (since the pod is managed by a Deployment/ReplicaSet).
Upvotes: -2
Reputation: 3866
Sometimes you don't know which OS the pod is running, and the image might not have sudo or reboot at all.
The safer option is to take a snapshot of the pod and recreate it:
kubectl get pod <pod-name> -o yaml > pod-to-be-restarted.yaml;
kubectl delete po <pod-name>;
kubectl create -f pod-to-be-restarted.yaml
Upvotes: -1
Reputation: 996
The correct, but likely less popular, answer is that if you need to restart one container in a pod, then it shouldn't be in the same pod. You can't restart single containers in a pod by design. Just move the container out into its own pod. From the docs:
Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers directly.
Note: Grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled.
https://kubernetes.io/docs/concepts/workloads/pods/
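As a rough sketch of that split (the names and image below are placeholders, not taken from the question's deployment.yaml), container-test could live in its own Deployment, which can then be restarted on its own with kubectl rollout restart deployment/container-test:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: container-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: container-test
  template:
    metadata:
      labels:
        app: container-test
    spec:
      containers:
      - name: container-test
        image: your-image:tag   # placeholder image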
Upvotes: 1
Reputation: 21
I realize this question is old and already answered, but I thought I'd chip in with my method.
Whenever I want to do this, I just make a minor change to the pod's container's image field, which causes kubernetes to restart just the container.
If you can't switch between two different but equivalent tags (like :latest / :1.2.3, where latest is actually version 1.2.3), then you can always just switch it quickly to an invalid tag (I put an X at the end, like :latestX or something) and then re-edit it and remove the X straight away afterwards. This does cause the container to fail to start with an image pull error for a few seconds, though.
So for example:
kubectl edit po my-pod-name
Find the spec.containers[].name you want to kill, then find its image:
apiVersion: v1
kind: Pod
metadata:
#...
spec:
containers:
- name: main-container
#...
- name: container-to-restart
image: container/image:tag
#...
You would search for your container-to-restart and then update its image to something different, which will force kubernetes to do a controlled restart for you.
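As an alternative to editing the pod by hand, kubectl set image also works on bare pods, so a sketch of the same trick (using the placeholder names from the snippet above) would be:
# temporarily point the container at a non-existent tag, then restore the real one
kubectl set image pod/my-pod-name container-to-restart=container/image:tagX
kubectl set image pod/my-pod-name container-to-restart=container/image:tag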
Upvotes: 2
Reputation: 912
kubectl rollout restart deployment [deployment_name]
or
kubectl delete pod [pod_name]
Upvotes: 29
Reputation: 460
I was playing around with ways to restart a container. What I ended up with was this solution:
Dockerfile:
...
ENTRYPOINT [ "/app/bootstrap.sh" ]
/app/bootstrap.sh:
#!/bin/bash
/app/startWhatEverYouActuallyWantToStart.sh &
tail -f /dev/null
Whenever I want to restart the container, I kill the tail -f /dev/null process, which I find with:
kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
Following that command, all the processes except the one with PID == 1 will be killed, and the entrypoint (in my case bootstrap.sh) will be executed (again).
That's the "restart" part - it is not really a restart, but it does what you want in the end. For limiting the restart to the container named container-test, you could pass the container name into the container in question (as the container name would otherwise not be available inside the container) and then decide whether to do the above kill.
That would be something like this in your deployment.yaml:
env:
- name: YOUR_CONTAINER_NAME
value: container-test
/app/startWhatEverYouActuallyWantToStart.sh:
#!/bin/bash
...
CONDITION_TO_RESTART=0
...
if [ "$YOUR_CONTAINER_NAME" == "container-test" -a $CONDITION_TO_RESTART -eq 1 ]; then
kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
fi
Upvotes: 0
Reputation: 1501
There are cases when you want to restart a specific container instead of deleting the pod and letting Kubernetes recreate it.
Doing a kubectl exec POD_NAME -c CONTAINER_NAME /sbin/killall5
worked for me.
(I changed the command from reboot
to /sbin/killall5
based on the below recommendations.)
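Note that with recent kubectl versions the command may need to be separated from the kubectl flags with --, i.e.:
kubectl exec POD_NAME -c CONTAINER_NAME -- /sbin/killall5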
Upvotes: 121
Reputation: 523
kubectl exec -it POD_NAME -c CONTAINER_NAME bash, then kill 1
This assumes the container is run as root, which is not recommended.
In my case, when I changed the application config I had to restart the container, which was used in a sidecar pattern; I would kill the PID of the Spring Boot application, which is owned by the docker user.
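A sketch of that last step, assuming the image ships pgrep and that "spring" matches the Java command line (adjust both to your image):
kubectl exec -it POD_NAME -c CONTAINER_NAME -- sh -c 'kill -TERM $(pgrep -f spring)'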
Upvotes: 4
Reputation: 1493
All the above answers mention deleting the pod... but if you have many pods of the same service, it would be tedious to delete each one of them.
Therefore, I propose the following solution to restart them:
1) Set the scale to zero:
kubectl scale deployment <<name>> --replicas=0 -n service
The above command will terminate all your pods with the name <<name>>.
2) To start the pods again, set the replicas to more than 0:
kubectl scale deployment <<name>> --replicas=2 -n service
The above command will start your pods again with 2 replicas.
Upvotes: 15
Reputation: 2460
There was an issue in the coredns pod, so I deleted that pod with:
kubectl delete pod -n=kube-system coredns-fb8b8dccf-8ggcf
The pod will be restarted automatically.
Upvotes: 2
Reputation: 2893
Killing the process specified in the Dockerfile's CMD
/ ENTRYPOINT
works for me. (The container restarts automatically)
Rebooting was not allowed in my container, so I had to use this workaround.
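A sketch of that, assuming the CMD/ENTRYPOINT process ended up as PID 1 inside the container (placeholder names):
kubectl exec -it POD_NAME -c CONTAINER_NAME -- sh -c 'kill 1'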
Upvotes: 1
Reputation: 559
We use a pretty convenient command line to force re-deployment of fresh images on the integration pod.
We noticed that our alpine containers all run their "sustaining" command on PID 5. Therefore, sending it a SIGTERM
signal takes the container down. imagePullPolicy
being set to Always
has the kubelet re-pull the latest image when it brings the container back.
kubectl exec -i [pod name] -c [container-name] -- kill -15 5
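If you're unsure which PID the sustaining command got in your image, you can check first (assuming ps is available in the container, as it is in alpine/busybox images):
kubectl exec -it [pod name] -c [container-name] -- ps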
Upvotes: 5
Reputation: 4757
The whole reason for having kubernetes is so it manages the containers for you, so you don't have to care so much about the lifecycle of the containers in the pod.
Since you have a deployment setup that uses a replica set, you can delete the pod using kubectl delete pod test-1495806908-xn5jn and kubernetes will manage the creation of a new pod with the 2 containers without any downtime. Trying to manually restart single containers in pods negates the whole benefit of kubernetes.
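To see the replacement pod come up while the old one terminates, you can watch the pod list in another terminal:
kubectl get pods -w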
Upvotes: 25
Reputation: 33158
Is it possible to restart a single container
Not through kubectl
, although depending on the setup of your cluster you can "cheat" and docker kill the-sha-goes-here
, which will cause kubelet to restart the "failed" container (assuming, of course, the restart policy for the Pod says that is what it should do)
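A sketch of that "cheat", assuming you have shell access to the node and the cluster uses the docker runtime (the jsonpath query is illustrative):
# find the container ID from the Pod's status
kubectl get pod test-1495806908-xn5jn -o jsonpath='{.status.containerStatuses[?(@.name=="container-test")].containerID}'
# then, on the node, kill it (strip the docker:// prefix from the ID)
docker kill <container-id>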
how do I restart the pod
That depends on how the Pod was created, but based on the Pod name you provided, it appears to be under the oversight of a ReplicaSet, so you can just kubectl delete pod test-1495806908-xn5jn
and kubernetes will create a new one in its place (the new Pod will have a different name, so do not expect kubectl get pods
to return test-1495806908-xn5jn
ever again)
Upvotes: 214