Reputation: 659
I am trying to deploy my application into a Rancher-managed Kubernetes (RKE) cluster. I have created a pipeline in GitLab using Auto DevOps, but when the Helm chart tries to deploy, I get this error:
Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
Below is my deploy script:
deploy:
  stage: deploy
  image: cdrx/rancher-gitlab-deploy
  only:
    - master
  script:
    - apk --no-cache add curl
    - curl -L https://get.helm.sh/helm-v3.3.0-rc.1-linux-amd64.tar.gz > helm.tar.gz
    - tar -zxvf helm.tar.gz
    - mv linux-amd64/helm /usr/local/bin/helm
    - helm install mychart ./mychart
Upvotes: 47
Views: 143307
Reputation: 1
Make sure you are using the latest versions. I faced the same problem and solved it by updating Docker.
Upvotes: 0
Reputation: 1430
I found this page while searching for the error: Kubernetes cluster unreachable. In my case the error was:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://37...
It turned out that I had simply forgotten to start my minikube cluster:
minikube start
Upvotes: 2
Reputation: 4875
If your microk8s setup is running on Windows 11 and you are calling Helm from a local CMD or PowerShell console, running
microk8s kubectl config view --raw > %USERPROFILE%/.kube/config
as per the documentation will add the following entry to your kubeconfig:
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://127.0.0.1:16443
  name: microk8s-cluster
From a Windows point of view, there is no listener on localhost port 16443. Instead, use the IP address returned by the following command as your server address:
microk8s kubectl describe node | FIND "InternalIP"
Once you update your kubeconfig file like this, your Helm calls should work as well.
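For illustration, the edited cluster entry would then look something like this (the IP below is a placeholder; substitute the InternalIP your node actually reports, and keep your existing certificate data):

```yaml
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://172.25.78.101:16443   # node InternalIP instead of 127.0.0.1
  name: microk8s-cluster
```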
Upvotes: 0
Reputation: 1553
I had a similar error. A bit of background context: I was working with multiple clusters and, by mistake, edited .kube/config manually. This resulted in an invalid configuration, with the context.cluster and context.user parameters missing. I filled in those values manually and it worked again.
Before the fix, the config file had a portion like this:
contexts:
- context:
    cluster: ""
    user: ""
  name: ""
I updated it as:
contexts:
- context:
    cluster: <NAME-OF-THE-CLUSTER>
    user: <USERNAME>
  name: <CONTEXT-NAME>
To fill in the values, I used the output of kubectl config get-contexts (I still had it in my terminal history, which helped).
Upvotes: 0
Reputation: 113
I just had the same issue. This happens because you are a non-root user. Run
sudo su
then execute the export and all other commands:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.7.1
Upvotes: 8
Reputation: 61
If the following command doesn't work:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
you can try using the root user to install k3s and Helm.
Upvotes: 6
Reputation: 614
Some good answers here specify how to fix the problem. Here's a passage from the excellent O'Reilly book "Learning Helm" that gives insight into why this error is happening:
"Working with Kubernetes Clusters Helm interacts directly with the Kubernetes API server. For that reason, Helm needs to be able to connect to a Kubernetes cluster. Helm attempts to do this automatically by reading the same configuration files used by kubectl (the main Kubernetes command-line client).
Helm will try to find this information by reading the environment variable $KUBECONFIG. If that is not set, it will look in the same default locations that kubectl looks in (for example, $HOME/.kube/config on UNIX, Linux, and macOS).
You can also override these settings with environment variables (HELM_KUBECONTEXT) and command-line flags (--kube-context). You can see a list of environment variables and flags by running helm help. The Helm maintainers recommend using kubectl to manage your Kubernetes credentials and letting Helm merely autodetect these settings. If you have not yet installed kubectl, the best place to start is with the official Kubernetes installation documentation."
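As a quick illustration of that lookup order, here is a pure-shell sketch (no cluster needed; `resolve_kubeconfig` is a hypothetical helper mimicking the simplified fallback the book describes, not a real Helm command):

```shell
#!/bin/sh
# Simplified version of Helm/kubectl's kubeconfig lookup:
# use $KUBECONFIG if it is set, otherwise fall back to the default path.
resolve_kubeconfig() {
    if [ -n "$KUBECONFIG" ]; then
        echo "$KUBECONFIG"
    else
        echo "$HOME/.kube/config"
    fi
}

# With the variable set, Helm would read the k3s config:
KUBECONFIG=/etc/rancher/k3s/k3s.yaml resolve_kubeconfig   # -> /etc/rancher/k3s/k3s.yaml
# Without it, the default location wins:
unset KUBECONFIG
resolve_kubeconfig                                        # -> $HOME/.kube/config
```

This is why the "connection refused on localhost:8080" error usually means Helm found no usable kubeconfig at all and fell back to the client library's built-in default address.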
-Learning Helm by Matt Butcher, Matt Farina, and Josh Dolitsky (O’Reilly). Copyright 2021 Matt Butcher, Innovating Tomorrow, and Blood Orange, 978-1-492-08365-8.
Upvotes: 5
Reputation: 588
This answer solved the issue for me. If, like me, you're not running on microk8s, omit the prefix:
[microk8s] kubectl config view --raw > ~/.kube/config
Upvotes: 36
Reputation: 1214
I bumped into the same issue when installing Rancher on k3s; setting KUBECONFIG helped.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
Upvotes: 67