Reputation: 4530
Following this guide, I'm trying to start minikube
and forward a port at boot time.
My script:
#!/bin/bash
set -eux
export PATH=/usr/local/bin:$PATH
minikube status || minikube start
minikube ssh 'grep docker.for.mac.localhost /etc/hosts || echo -e "127.0.0.1\tdocker.for.mac.localhost" | sudo tee -a /etc/hosts'
minikube ssh 'test -f wait-for-it.sh || curl -O https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh'
minikube ssh 'chmod +x wait-for-it.sh && ./wait-for-it.sh 127.0.1.1:10250'
POD=$(kubectl get po --namespace kube-system | awk '/kube-registry-v0/ { print $1 }')
kubectl port-forward --namespace kube-system $POD 5000:5000
Everything works fine except that kubectl port-forward
says the pod does not exist on the first run:
++ kubectl get po --namespace kube-system
++ awk '/kube-registry-v0/ { print $1 }'
+ POD=kube-registry-v0-qr2ml
+ kubectl port-forward --namespace kube-system kube-registry-v0-qr2ml 5000:5000
error: error upgrading connection: unable to upgrade connection: pod does not exist
If I re-run:
+ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
+ minikube ssh 'grep docker.for.mac.localhost /etc/hosts || echo -e "127.0.0.1\tdocker.for.mac.localhost" | sudo tee -a /etc/hosts'
127.0.0.1 docker.for.mac.localhost
+ minikube ssh 'test -f wait-for-it.sh || curl -O https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh'
+ minikube ssh 'chmod +x wait-for-it.sh && ./wait-for-it.sh 127.0.1.1:10250'
wait-for-it.sh: waiting 15 seconds for 127.0.1.1:10250
wait-for-it.sh: 127.0.1.1:10250 is available after 0 seconds
++ kubectl get po --namespace kube-system
++ awk '/kube-registry-v0/ { print $1 }'
+ POD=kube-registry-v0-qr2ml
+ kubectl port-forward --namespace kube-system kube-registry-v0-qr2ml 5000:5000
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000
I added a debug line before forwarding:
kubectl describe pod --namespace kube-system $POD
and saw this:
+ POD=kube-registry-v0-qr2ml
+ kubectl describe pod --namespace kube-system kube-registry-v0-qr2ml
Name: kube-registry-v0-qr2ml
Namespace: kube-system
Node: minikube/192.168.99.100
Start Time: Thu, 28 Dec 2017 10:00:00 +0700
Labels: k8s-app=kube-registry
version=v0
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kube-registry-v0","uid":"317ecc42-eb7b-11e7-a8ce-...
Status: Running
IP: 172.17.0.6
Controllers: ReplicationController/kube-registry-v0
Containers:
registry:
Container ID: docker://6e8f3f33399605758354f3f546996067d834459781235d51eef3ffa9c6589947
Image: registry:2.5.1
Image ID: docker-pullable://registry@sha256:946480a23b33480b8e7cdb89b82c1bd6accae91a8e66d017e21e8b56551f6209
Port: 5000/TCP
State: Running
Started: Thu, 28 Dec 2017 13:22:44 +0700
Why does kubectl
say that it does not exist?
Update (Fri Dec 29 04:58:06 +07 2017):
Looking carefully at the events, I found something:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
20m 20m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "image-store"
20m 20m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-fs7kr"
20m 20m 1 kubelet, minikube Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
20m 20m 1 kubelet, minikube spec.containers{registry} Normal Pulled Container image "registry:2.5.1" already present on machine
20m 20m 1 kubelet, minikube spec.containers{registry} Normal Created Created container
20m 20m 1 kubelet, minikube spec.containers{registry} Normal Started Started container
Pod sandbox changed, it will be killed and re-created.
Before:
Containers:
registry:
Container ID: docker://47c510dce00c6c2c29c9fe69665e1241c457d0666174a7723062c534e7229c58
Image: registry:2.5.1
Image ID: docker-pullable://registry@sha256:946480a23b33480b8e7cdb89b82c1bd6accae91a8e66d017e21e8b56551f6209
Port: 5000/TCP
State: Running
Started: Thu, 28 Dec 2017 13:47:02 +0700
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 28 Dec 2017 13:22:44 +0700
Finished: Thu, 28 Dec 2017 13:45:18 +0700
Ready: True
Restart Count: 14
After:
Containers:
registry:
Container ID: docker://3a7da784d3d596796111348757725f5af22b47c5edd0fc29a4ffbb84f3f08956
Image: registry:2.5.1
Image ID: docker-pullable://registry@sha256:946480a23b33480b8e7cdb89b82c1bd6accae91a8e66d017e21e8b56551f6209
Port: 5000/TCP
State: Running
Started: Thu, 28 Dec 2017 19:03:04 +0700
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 28 Dec 2017 13:47:02 +0700
Finished: Thu, 28 Dec 2017 19:00:48 +0700
Ready: True
Restart Count: 15
minikube logs:
Dec 28 22:15:41 minikube localkube[3250]: W1228 22:15:41.102038    3250 docker_sandbox.go:343] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kube-registry-v0-qr2ml through plugin: invalid network status for
Upvotes: 2
Views: 4601
Reputation: 33203
POD=$(kubectl get po --namespace kube-system | awk '/kube-registry-v0/ { print $1 }')
Be aware that using a selector is almost certainly better than using text utilities, especially with "unstructured" output from kubectl. I don't know of any promises they make about the format of the default output, which is why --output=json and friends exist. However, in your case, when you just want the name, there is a special --output=name which does what it says, with the mild caveat that the Resource prefix will be in front of the name (pods/kube-registry-v0-qr2ml in your case).
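For example, a minimal sketch of that lookup, assuming the k8s-app=kube-registry label shown in your describe output (the variable handling here is mine, not a fixed recipe):

# Look the pod up by its label instead of scraping the human-readable table.
POD=$(kubectl --namespace=kube-system get pods \
    --selector=k8s-app=kube-registry --output=name)
POD=${POD#pods/}   # --output=name returns "pods/<name>"; strip the prefix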
Separately, I see that you have "wait-for-it," but just because a port is accepting connections doesn't mean the Pod is Ready. You'll actually want to use --output=json (or more awk scripts, I guess) to ensure the Pod is both Running and Ready, with the latter status reached when kubernetes and the Pod agree that everything is cool.
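One possible way to do that waiting, sketched with a jsonpath template rather than full --output=json, and assuming $POD holds the bare pod name:

# Poll until the pod reports phase Running and a Ready condition of True.
until kubectl --namespace=kube-system get pod "$POD" \
    --output='jsonpath={.status.phase} {.status.conditions[?(@.type=="Ready")].status}' \
    | grep -q 'Running True'; do
  sleep 2
done

Newer kubectl releases also ship kubectl wait --for=condition=Ready, which does this polling for you.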
I suspect, but would have to experiment to know for sure, that the error message is just misleading; it isn't truly that kubernetes doesn't know anything about your Pod, but merely that it couldn't port-forward to it in the state it's in.
You may also experience better success by creating a Service of type: NodePort and then talking to the Node's IP on the allocated port; that side-steps this kubectl-shell mess entirely, but does not side-step the Ready part -- only Pods in the Ready state will receive traffic from a Service.
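A minimal sketch of such a Service, selecting on the k8s-app=kube-registry label from your pod; the Service name and nodePort here are made up, so adjust them to your setup:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: kube-registry-nodeport    # hypothetical name
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kube-registry
  ports:
  - port: 5000
    targetPort: 5000
    nodePort: 30500               # hypothetical; must fall in the 30000-32767 node port range
EOF

With that in place, the registry would be reachable at $(minikube ip):30500, with no port-forward in the boot script.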
As a minor, pedantic note, --namespace is an argument to kubectl, and not to port-forward, so the most correct invocation is kubectl --namespace=kube-system port-forward kube-registry-v0-qr2ml 5000:5000, to ensure the argument isn't mis-parsed.
Upvotes: 6