Elvinas

Reputation: 91

Is it possible to access the Kubernetes API via HTTPS ingress?

I was trying unsuccessfully to access the Kubernetes API via HTTPS ingress and have started to wonder whether that is possible at all.

Any working, detailed guide to direct remote access (without using ssh -> kubectl proxy, to avoid user management on the Kubernetes node) would be appreciated. :)

UPDATE:

Just to make this clearer: this is a bare-metal, on-premises deployment (no GCE, AWS, Azure, or any other cloud), and the intention is that some environments will be completely offline (which will add further issues with getting the install packages).

The intention is to be able to use kubectl on a client host with authentication via Keycloak (which also fails when I follow the step-by-step instructions). Administrative access using SSH and then kubectl is not suitable for client access. So it looks like I will have to update the firewall to expose the API port and create a NodePort service, sketched below.
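For reference, a minimal sketch of such a NodePort service. Since the default "kubernetes" service has no selector, this uses a selector-less Service plus a manual Endpoints object; 10.0.0.1 stands in for the master's address and 30443 is an arbitrary port in the default NodePort range:

---
apiVersion: v1
kind: Service
metadata:
  name: apiserver-nodeport   # hypothetical name
  namespace: default
spec:
  type: NodePort
  ports:
  - name: https
    port: 6443
    targetPort: 6443
    nodePort: 30443          # placeholder within 30000-32767
---
apiVersion: v1
kind: Endpoints
metadata:
  name: apiserver-nodeport   # must match the Service name
  namespace: default
subsets:
- addresses:
  - ip: 10.0.0.1             # placeholder: the master node's IP
  ports:
  - name: https
    port: 6443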

Setup:

[kubernetes - env] - [FW/SNAT] - [me]

FW/NAT allows access only on ports 22, 80, and 443.

Since I cannot create a firewall rule to redirect 443 to 6443, the only option seems to be an HTTPS ingress that points "api-kubernetes.node.lan" to the Kubernetes service on port 6443. Ingress itself is working fine: I have already created a working ingress for the Keycloak auth application.

I have copied .kube/config from the master node to ~/.kube/config on my machine (Cygwin environment).

What was attempted:

Comparing apiserver.pem with my generated certificate, the only difference I see is:

apiserver.pem
X509v3 Key Usage:
                 Digital Signature, Non Repudiation, Key Encipherment
generated.crt
X509v3 Extended Key Usage:
                 TLS Web Server Authentication
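For reference, a sketch of how the two certificates were compared (openssl assumed available):

openssl x509 -in apiserver.pem -noout -text | grep -A1 "Key Usage"
openssl x509 -in generated.crt -noout -text | grep -A1 "Key Usage"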

Ingress configuration:

---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: kubernetes-api
  namespace: default  
  labels:
    app: kubernetes
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - secretName: kubernetes-api-cert
      hosts:    
        - api-kubernetes.node.lan

  rules:
  - host: api-kubernetes.node.lan
    http:
      paths:
      - path: "/"
        backend:
          serviceName: kubernetes
          servicePort: 6443

Links: [1] https://db-blog.web.cern.ch/blog/lukas-gedvilas/2018-02-creating-tls-certificates-using-kubernetes-api

Upvotes: 6

Views: 7668

Answers (4)

Id2ndR

Reputation: 173

To use certificate-based authentication, TLS passthrough should be enabled in the ingress. Refer to Unauthorized 401.
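A minimal sketch of what that looks like on the Ingress from the question, assuming the NGINX ingress controller was started with --enable-ssl-passthrough (the annotation is ignored otherwise):

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-api
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    # Hand the raw TLS stream to the backend, so the apiserver's own
    # certificate and client-certificate auth work end to end.
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: api-kubernetes.node.lan
    http:
      paths:
      - path: "/"
        backend:
          serviceName: kubernetes
          servicePort: 6443

Note that with passthrough there is no tls/secretName section; the TLS connection is terminated by the apiserver itself.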

Upvotes: 0

mikeLundquist

Reputation: 1013

For those coming here who just want to reach their Kubernetes API from another network and under another host name, but don't need to move the API off the default port 6443, an ingress isn't necessary.

If this describes you, all you have to do is add additional SAN entries to your API server's certificate for the DNS name you're connecting through. This article describes the process in detail.
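On a kubeadm cluster, the regeneration can be sketched like this (newer kubeadm syntax; on older versions the subcommand lived under "kubeadm alpha phase certs"):

# Back up the current apiserver certificate, then reissue it with an extra SAN
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.bak
sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.bak
sudo kubeadm init phase certs apiserver \
    --apiserver-cert-extra-sans=api-kubernetes.node.lan
# Restart the kube-apiserver static pod (e.g. by restarting the kubelet)
sudo systemctl restart kubelet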

Upvotes: 0

Elvinas

Reputation: 91

Partially answering my own question.

For the moment I am satisfied with token-based auth: it allows separate access levels and avoids giving users shell access. Keycloak-based dashboard auth worked, but after logging in I was not able to log out. There is no logout option. :D
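A sketch of that token-based setup, with remote-admin as a hypothetical account name (this relies on the auto-created service account token secrets of that Kubernetes era):

# Create a service account and grant it a role (cluster-admin here; use a
# narrower role for real separate access levels)
kubectl -n kube-system create serviceaccount remote-admin
kubectl create clusterrolebinding remote-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:remote-admin
# Read the token out of the auto-generated secret
kubectl -n kube-system describe secret \
    $(kubectl -n kube-system get secret | grep remote-admin | awk '{print $1}')
# On the client, plug the token into kubectl
kubectl config set-credentials remote-admin --token=<token from above>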

And to access the dashboard itself via Ingress, I found a working rewrite rule somewhere: nginx.ingress.kubernetes.io/configuration-snippet: "rewrite ^(/ui)$ $1/ui/ permanent;"

One note: the UI must be accessed with a trailing slash: https://server_address/ui/
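For reference, here is where that snippet sits, assuming a dashboard Ingress named kubernetes-dashboard (only the relevant metadata is shown):

metadata:
  name: kubernetes-dashboard   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/ui)$ $1/ui/ permanent;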

Upvotes: 0

Rico

Reputation: 61521

You should be able to do it as long as you expose the kube-apiserver pod in the kube-system namespace. I tried it like this:

$ kubectl -n kube-system expose pod kube-apiserver-xxxx --name=apiserver --port 6443
service/apiserver exposed
$ kubectl -n kube-system get svc
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
apiserver                 ClusterIP   10.x.x.x         <none>        6443/TCP                      1m
...

Then I go to a cluster machine and point my ~/.kube/config context at 10.x.x.x:6443:

clusters:
- cluster:
    certificate-authority-data: [REDACTED]
    server: https://10.x.x.x:6443
  name: kubernetes
...

Then:

$ kubectl version --insecure-skip-tls-verify
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

I used --insecure-skip-tls-verify because 10.x.x.x is not valid on the server certificate. You can actually fix that as described here: Configure AWS publicIP for a Master in Kubernetes.

So, a couple of things to look at in your case:

  1. Since you are terminating SSL on the Ingress, you need to use the same kube-apiserver certificates from /etc/kubernetes/pki/ on your master (see the sketch after this list)
  2. You need to add the external IP or name under which the Ingress is exposed to that certificate. Follow something like this: Configure AWS publicIP for a Master in Kubernetes
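For point 1, reusing the apiserver certificate as the Ingress TLS secret can be sketched like this (paths are the kubeadm defaults on the master; the secret name matches the one in the question's Ingress):

kubectl create secret tls kubernetes-api-cert \
    --cert=/etc/kubernetes/pki/apiserver.crt \
    --key=/etc/kubernetes/pki/apiserver.key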

Upvotes: 1
