Reputation: 79
I'm currently trying to move our company's Squid server to a Dockerized version, and I'm struggling to get it working with Kubernetes.
I have built a Docker image that works perfectly fine when run with docker run. The complete docker run command is:
sudo docker run -d -i -t --privileged --volume=/proc/sys/net/ipv4/ip_nonlocal_bind:/var/proc/sys/net/ipv4/ip_nonlocal_bind --net=host --cap-add=SYS_MODULE --cap-add=NET_ADMIN --cap-add=NET_RAW -v /dev:/dev -v /lib/modules:/lib/modules -p80:80 -p8080:8080 -p53:53/udp -p5353:5353/udp -p5666:5666/udp -p4500:4500/udp -p500:500/udp -p3306:3306 --name=edge crossense/edge:latest /bin/bash
When I try to run the image with Kubernetes, with something like:
kubectl run --image=crossense/edge:latest --port=80 --port=8080 --port=53 --port=5353 --port=5666 --port=4500 --port=500 --port=3306 edge
it seems like Kubernetes tries to get the container up and running, but without any success:
$ kubectl get po
NAME         READY     REASON    RESTARTS   AGE
edge-sz7wp   0/1       Running   10         15m
And $ kubectl describe pod edge gives me lots of these events:
Thu, 09 Nov 2017 17:13:05 +0000 Thu, 09 Nov 2017 17:13:05 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id abcc2ff25a624a998871e02bcb62d42d6f39e9db0a39f601efa4d357dd8334aa
Thu, 09 Nov 2017 17:13:15 +0000 Thu, 09 Nov 2017 17:13:15 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 878778836bd3cc25bdf1e3b9cc2f2f6fa22b75b938a481172f08a6ec50571582
Thu, 09 Nov 2017 17:13:15 +0000 Thu, 09 Nov 2017 17:13:15 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 878778836bd3cc25bdf1e3b9cc2f2f6fa22b75b938a481172f08a6ec50571582
[ ... the same Created / Started event pair repeats roughly every 10 seconds, each time with a new docker id ... ]
Thu, 09 Nov 2017 17:14:55 +0000 Thu, 09 Nov 2017 17:14:55 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 00fe4c286c1cc9b905c9c0927f82b39d45d41295a9dd0852131bba087bb19610
Thu, 09 Nov 2017 17:14:55 +0000 Thu, 09 Nov 2017 17:14:55 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 00fe4c286c1cc9b905c9c0927f82b39d45d41295a9dd0852131bba087bb19610
Any help would be much appreciated!
Upvotes: 0
Views: 3281
Reputation: 79
For all the poor souls out there who couldn't figure out the answer: the reason the pod keeps restarting is that the command it runs exits with code 0 (i.e. successfully), so Kubernetes considers the container terminated and, under the default restart policy, starts it again.
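You can reproduce this outside Kubernetes with plain Docker: without the -i and -t flags, bash reads EOF on stdin and exits with status 0 straight away (a quick check, using a stock image rather than mine):

docker run --rm ubuntu:14.04 /bin/bash; echo "exited with code $?"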
In my case, I was running /bin/bash as the entrypoint command, as specified in my pod configuration .yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: edge
spec:
  containers:
  - name: edge
    image: "crossense/edge:production"
    command:
    - /bin/bash
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - SYS_MODULE
        - NET_RAW
    volumeMounts:
    - name: ip-nonlocal-bind
      mountPath: /host/proc/sys/net/ipv4
    - name: dev
      mountPath: /host/dev
    - name: modules
      mountPath: /host/lib/modules
...
The solution was simply adding a non-exiting command to the entrypoint. This can be any process that runs in the foreground, or simply a /bin/sleep, as sketched below.
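A minimal version of the fix is to override the command with something that never returns (a sketch; any long-running foreground process works just as well as sleep):

    command: ["/bin/bash", "-c"]
    args: ["sleep infinity"]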
For the sake of example and future learning, my final pod configuration file looked like this:
apiVersion: v1
kind: Pod
metadata:
  name: edge
spec:
  hostNetwork: true
  containers:
  - name: edge
    image: "crossense/edge:production"
    command: ["/bin/bash", "-c"]
    args: ["service rsyslog restart; service proxysql start; service mongodb start; service pdns-recursor start; service supervisor start; service danted start; touch /var/run/squid.pid; chown proxy /var/run/squid.pid; service squid restart; service ipsec start; /sbin/iptables-restore < /etc/iptables/rules.v4; sleep infinity"]
    securityContext:
      privileged: true
      capabilities:
        add:
        - NET_ADMIN
        - SYS_MODULE
        - NET_RAW
    volumeMounts:
    - mountPath: /dev/shm
      name: dshm
    - name: ip-nonlocal-bind
      mountPath: /host/proc/sys/net/ipv4
    - name: dev
      mountPath: /dev
    - name: modules
      mountPath: /lib/modules
    ports:
    - containerPort: 80
    - containerPort: 8080
    - containerPort: 53
      protocol: UDP
    - containerPort: 5353
      protocol: UDP
    - containerPort: 5666
    - containerPort: 4500
    - containerPort: 500
    - containerPort: 3306
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory
  - name: ip-nonlocal-bind
    hostPath:
      path: /proc/sys/net/ipv4
  - name: dev
    hostPath:
      path: /dev
      type: Directory
  - name: modules
    hostPath:
      path: /lib/modules
      type: Directory
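To try it out, save the manifest and create the pod (edge-pod.yaml is just an example filename):

kubectl create -f edge-pod.yaml
kubectl get pod edge    # should eventually show READY 1/1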
For any questions, feel free to comment on this thread, or ask me at [email protected] :)
Upvotes: 1
Reputation: 4407
While I can't say this conclusively without being able to reproduce the issue, and lacking logs, one of the differences that can be noticed easily is the set of privileges you have provided in the docker command (for example NET_ADMIN, NET_RAW, etc.), which are missing from the Kubernetes run command.
Kubernetes also provides the ability to assign such privileges to a pod, with capabilities within the securityContext of a pod declaration.
I am not sure if you can do this with kubectl run, but if you use the YAML declaration for the pod, the spec looks roughly like:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myshell
    image: "ubuntu:14.04"
    command:
    - /bin/sleep
    - "300"
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
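Once the pod is running, you can verify the granted capabilities by looking at the capability bitmasks of the container's main process (this only relies on standard /proc fields, so it should work in most images):

kubectl exec mypod -- grep Cap /proc/1/status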
For more reference, I would suggest a quick look at the Kubernetes documentation on configuring a security context: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Upvotes: 1