francois_halbach

Reputation: 65

Why does attempting to connect to my ingress show connection refused?

I'm running Kubernetes 1.21.0 on CentOS 7. I've set up a Keycloak service to test my ingress controller, and I can reach Keycloak on the host URL with its NodePort, e.g. myurl.com:30872. These are my running services:

NAMESPACE       NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default         keycloak                             NodePort    10.96.11.164    <none>        8080:30872/TCP               21h
default         kubernetes                           ClusterIP   10.96.0.1       <none>        443/TCP                      11d
ingress-nginx   ingress-nginx-controller             NodePort    10.102.201.24   <none>        80:31110/TCP,443:30566/TCP   9m45s
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.107.90.207   <none>        80/TCP,443/TCP               9m45s
kube-system     kube-dns                             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP       11d

I've deployed the NGINX ingress controller and added an HTTP webhook port to its admission service:

# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.23.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: http-webhook
      port: 80
      targetPort: webhook
    - name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller

With this ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
        - path: /keycloak
          pathType: Prefix
          backend:
            service:
              name: keycloak
              port: 
                number: 8080

Now when I attempt to connect to the Keycloak service through the ingress, I go to myurl.com/keycloak, but it's unable to connect, and curling it from the control node shows connection refused:

# curl -I http://127.0.0.1/keycloak
curl: (7) Failed connect to 127.0.0.1:80; Connection refused

Can someone see what I'm missing?

Edit:

I realized the ingress controller actually works, but I need to specify its NodePort as well to reach it, like this:

curl -I http://127.0.0.1:31110/keycloak

Which I'd like to avoid.

Upvotes: 2

Views: 12932

Answers (1)

moonkotte

Reputation: 4181

You have to specify port 31110 because your NGINX ingress controller is exposed as a NodePort service, which means Kubernetes listens on that port on every node and redirects any traffic arriving there to the nginx-ingress-controller pod.
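For illustration, the relevant part of your ingress-nginx-controller Service looks roughly like this (a trimmed sketch reconstructed from the service listing above, not the full manifest):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80          # service port inside the cluster
      targetPort: http  # named container port on the controller pod
      nodePort: 31110   # port opened on every node
    - name: https
      port: 443
      targetPort: https
      nodePort: 30566

The nodePort values (31110 and 30566) are what you currently have to append to the URL.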

Depending on your setup and goals, this can be achieved differently.

Option 1 - for testing purposes only, with no changes to the setup. It works only on the node where the nginx-ingress-controller pod is running (here, the control plane node).

It's possible to forward traffic from the host's port 80 directly to port 80 of the nginx-ingress-controller pod. You can run this command (in the background):

sudo kubectl port-forward ingress-nginx-controller-xxxxxxxx-yyyyy 80:80 -n ingress-nginx &

A curl test shows that it's working:

curl -I localhost/keycloak
Handling connection for 80
HTTP/1.1 200 OK
Date: Wed, 16 Jun 2021 13:19:23 GMT

Curl can also be run from a different instance; in that case the command looks like this, without specifying any port:

curl -I public_ip/keycloak
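Note that kubectl port-forward binds to 127.0.0.1 by default, so for the forward to be reachable from another machine it has to listen on all interfaces, for example (same placeholder pod name as above):

sudo kubectl port-forward --address 0.0.0.0 ingress-nginx-controller-xxxxxxxx-yyyyy 80:80 -n ingress-nginx &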

Option 2 - this one is a bit more involved; however, it provides better results.

It's possible to expose pods outside of the cluster directly. The feature is called hostPort: it exposes a single container port on the host's IP. To make this work on every worker node, ingress-nginx-controller should be deployed as a DaemonSet.
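For context, hostPort is set on the container ports in the pod spec; with the chart values below, the rendered controller pod ends up with something like this (a trimmed, illustrative sketch):

ports:
  - name: http
    containerPort: 80
    hostPort: 80       # opens port 80 directly on the node's IP
    protocol: TCP
  - name: https
    containerPort: 443
    hostPort: 443      # opens port 443 directly on the node's IP
    protocol: TCP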

Below are the parts of values.yaml for the ingress-nginx Helm chart that I changed:

hostPort -> enabled -> true

  ## Use host ports 80 and 443
  ## Disabled by default
  ##
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443

kind -> DaemonSet

  ## DaemonSet or Deployment
  ##
  kind: DaemonSet

Then install ingress-nginx-controller from this chart (an example install command is shown after the test below). With these settings, the ingress-nginx-controller pods listen directly on ports 80 and 443 on every node, which a simple test confirms:

curl -I localhost/keycloak
HTTP/1.1 200 OK
Date: Wed, 16 Jun 2021 13:31:25 GMT
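For reference, installing the chart with those overrides might look like this (a sketch, assuming the upstream ingress-nginx Helm repository and a local values.yaml containing the changes above):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml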

Option 3 - may be considered as well, if ingress-nginx is installed with the LoadBalancer service type.

Use MetalLB - a software load balancer designed specifically for bare-metal clusters. See the MetalLB documentation for how to install and configure it.
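As an example, at the time MetalLB was configured through a ConfigMap; a minimal Layer 2 address pool might look like this (a sketch - the address range has to come from your own network and is what produces the External-IP shown below):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250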

Once that's done and ingress-nginx is deployed, the ingress-nginx service gets an External-IP:

kubectl get svc --all-namespaces

NAMESPACE       NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx   ingress-nginx-controller             LoadBalancer   10.102.135.146   192.168.1.240   80:32400/TCP,443:32206/TCP   43s

Testing this again with curl:

curl -I 192.168.1.240/keycloak
HTTP/1.1 200 OK
Date: Wed, 16 Jun 2021 13:55:34 GMT

Upvotes: 4
