PrestonDocks

Reputation: 5418

kubectl works on server but not in container

I am running a GitLab pipeline to deploy code to Kubernetes.

When my deploy script runs on the gitlab-runner, I get the following error:

Unable to connect to the server: dial tcp 192.168.10.50:443: i/o timeout

The deploy script runs in the dtzar/helm-kubectl Docker container. Here is the relevant part of the pipeline configuration:

....

variables:
  KUBECONFIG: /root/.kube/config

deploy:
  image: dtzar/helm-kubectl
  stage: deploy
  only:
    - tags
  before_script:
    - mkdir -p /root/.kube/ && touch /root/.kube/config
    - echo ${KUBERNETES_KUBE_CONFIG} | base64 -d > ${KUBECONFIG}
    - kubectl get pods
  script:
    - kubectl apply -f kubernetes/deployment.yaml
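
For reference, to confirm which API endpoint the decoded kubeconfig actually points at, a purely diagnostic line like this can be added to before_script (the jsonpath query is just one way of printing the server URL):

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

Going by the verbose output further down, that endpoint is https://192.168.10.50/k8s/clusters/c-rkqlj.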

If I log in to the VM running the gitlab-runner, I can successfully run kubectl and apply the deployment manually.

I was wondering whether this is an SSL certificate issue, since the cluster uses self-signed certificates generated by Rancher. In that case, though, I don't understand why running kubectl manually on the gitlab-runner host works.
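
To narrow down whether this is the container networking rather than certificates or credentials, the same command can be repeated on the runner VM, once directly on the host and once from a throwaway container that re-uses the host's kubeconfig. The docker run invocation below is a sketch (the config path and the absence of a conflicting entrypoint in the image are assumptions):

# directly on the gitlab-runner VM - this works
kubectl get pods

# from a container on the same VM, mounting the host's kubeconfig read-only
docker run --rm -v "$HOME/.kube/config:/root/.kube/config:ro" dtzar/helm-kubectl kubectl get pods

If the second command times out the same way, the problem is the Docker network on that VM rather than anything in the kubeconfig.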

Update: if I run the following kubectl command in the dtzar/helm-kubectl container

kubectl get pods -v7

I get the following output:

kubectl get pods -v7
I0510 21:46:47.726119      15 loader.go:372] Config loaded from file:  /root/.kube/config
I0510 21:46:47.727408      15 round_trippers.go:432] GET https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s
I0510 21:46:47.727590      15 round_trippers.go:438] Request Headers:
I0510 21:46:47.727690      15 round_trippers.go:442]     User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6
I0510 21:46:47.727774      15 round_trippers.go:442]     Accept: application/json, */*
I0510 21:46:47.727845      15 round_trippers.go:442]     Authorization: Bearer <masked>
I0510 21:47:17.730496      15 round_trippers.go:457] Response Status:  in 30002 milliseconds
I0510 21:47:17.730937      15 cached_discovery.go:121] skipped caching discovery info due to Get "https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s": dial tcp 192.168.10.50:443: i/o timeout
I0510 21:47:17.731421      15 round_trippers.go:432] GET https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s
I0510 21:47:17.731582      15 round_trippers.go:438] Request Headers:
I0510 21:47:17.731719      15 round_trippers.go:442]     Accept: application/json, */*
I0510 21:47:17.731810      15 round_trippers.go:442]     User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6
I0510 21:47:17.731909      15 round_trippers.go:442]     Authorization: Bearer <masked>
I0510 21:47:47.732971      15 round_trippers.go:457] Response Status:  in 30000 milliseconds
I0510 21:47:47.733296      15 cached_discovery.go:121] skipped caching discovery info due to Get "https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s": dial tcp 192.168.10.50:443: i/o timeout
I0510 21:47:47.733429      15 shortcut.go:89] Error loading discovery information: Get "https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s": dial tcp 192.168.10.50:443: i/o timeout
I0510 21:47:47.733595      15 round_trippers.go:432] GET https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s
I0510 21:47:47.733700      15 round_trippers.go:438] Request Headers:
I0510 21:47:47.733817      15 round_trippers.go:442]     Accept: application/json, */*
I0510 21:47:47.733888      15 round_trippers.go:442]     User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6
I0510 21:47:47.733974      15 round_trippers.go:442]     Authorization: Bearer <masked>
I0510 21:48:17.735311      15 round_trippers.go:457] Response Status:  in 30001 milliseconds
I0510 21:48:17.735630      15 cached_discovery.go:121] skipped caching discovery info due to Get "https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s": dial tcp 192.168.10.50:443: i/o timeout
I0510 21:48:17.735900      15 round_trippers.go:432] GET https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s
I0510 21:48:17.736018      15 round_trippers.go:438] Request Headers:
I0510 21:48:17.736097      15 round_trippers.go:442]     Accept: application/json, */*
I0510 21:48:17.736176      15 round_trippers.go:442]     User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6
I0510 21:48:17.736244      15 round_trippers.go:442]     Authorization: Bearer <masked>
I0510 21:48:47.736563      15 round_trippers.go:457] Response Status:  in 30000 milliseconds
I0510 21:48:47.736625      15 cached_discovery.go:121] skipped caching discovery info due to Get "https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s": dial tcp 192.168.10.50:443: i/o timeout
I0510 21:48:47.737125      15 round_trippers.go:432] GET https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s
I0510 21:48:47.737143      15 round_trippers.go:438] Request Headers:
I0510 21:48:47.737149      15 round_trippers.go:442]     Accept: application/json, */*
I0510 21:48:47.737154      15 round_trippers.go:442]     User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6
I0510 21:48:47.737160      15 round_trippers.go:442]     Authorization: Bearer <masked>
I0510 21:49:17.737909      15 round_trippers.go:457] Response Status:  in 30000 milliseconds
I0510 21:49:17.738305      15 cached_discovery.go:121] skipped caching discovery info due to Get "https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s": dial tcp 192.168.10.50:443: i/o timeout
I0510 21:49:17.738517      15 helpers.go:234] Connection error: Get https://192.168.10.50/k8s/clusters/c-rkqlj/api?timeout=32s: dial tcp 192.168.10.50:443: i/o timeout
F0510 21:49:17.738648      15 helpers.go:115] Unable to connect to the server: dial tcp 192.168.10.50:443: i/o timeout
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc0005eb200, 0x77, 0x1e9)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x30584c0, 0xc000000003, 0x0, 0x0, 0xc00010a4d0, 0x25f6190, 0xa, 0x73, 0x40e300)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x30584c0, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc000474230, 0x1, 0x1)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0003840f0, 0x48, 0x1)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x207fca0, 0xc0001a05d0, 0x1f064f0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000580b00, 0xc00001b8a0, 0x1, 0x2)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:167 +0x159
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000580b00, 0xc00001b880, 0x2, 0x2, 0xc000580b00, 0xc00001b880)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00018a000, 0xc000082180, 0xc00003a080, 0x4)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
main.main()
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d
goroutine 6 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x30584c0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1164 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:418 +0xdf
goroutine 8 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1f06410, 0x207e160, 0xc000998cf0, 0x1, 0xc000048ba0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1f06410, 0x12a05f200, 0x0, 0x1, 0xc000048ba0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x1f06410, 0x12a05f200, 0xc000048ba0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96
goroutine 28 [runnable]:
context.WithDeadline.func2()
    /usr/local/go/src/context/context.go:451
created by time.goFunc
    /usr/local/go/src/time/sleep.go:180 +0x45
goroutine 25 [runnable]:
net/http.setRequestCancel.func4(0x0, 0xc0001a07e0, 0xc0002f3f40, 0xc00035a00c, 0xc00078c060)
    /usr/local/go/src/net/http/client.go:397 +0x96
created by net/http.setRequestCancel
    /usr/local/go/src/net/http/client.go:396 +0x337
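
Every attempt fails at the TCP dial itself, before any TLS handshake, so a temporary raw-reachability test could be added to before_script as a further check. Whether nc is present in the image is an assumption; any plain TCP check would do:

before_script:
  # temporary debug step: can the job container reach the API endpoint at all?
  - nc -zv -w 5 192.168.10.50 443 || true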

Upvotes: 0

Views: 392

Answers (1)

Rakesh Gupta

Reputation: 3750

Apparently, your kubeconfig is using the master node's private IP (192.168.10.50), which can't be reached from the container the GitLab job runs in.

Assign a floating/public IP to the master node and use that in your kubeconfig.
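
Once the master node has an address that the job's network can actually reach, point the cluster entry at it, for example (the cluster name and IP here are placeholders):

kubectl config set-cluster <cluster-name> --server=https://<reachable-ip>/k8s/clusters/c-rkqlj

Since the job decodes its config from the KUBERNETES_KUBE_CONFIG variable, the equivalent fix on the CI side is to update the server: field in that stored kubeconfig and re-encode it with base64.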

Upvotes: 0
