potatoxchip

Reputation: 565

ingress-nginx connects from outside minikube, but connection is refused from inside minikube

I am trying to access my ingress-nginx service from another service inside the cluster, but the connection is refused. Here is my Ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 3000
          - path: /api/tickets/?(.*)
            backend:
              serviceName: tickets-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000

And here is my ingress-nginx service:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
❯ kubectl get services -n ingress-nginx
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.101.124.218   10.101.124.218   80:30634/TCP,443:30179/TCP   15m

The ingress-nginx service is running in the ingress-nginx namespace, so it should be reachable at http://ingress-nginx.ingress-nginx.svc.cluster.local. But when I access that name, I get connection refused from 10.101.124.218:80. I am able to access the ingress from outside the cluster, i.e. via the ingress IP.
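
To rule out DNS, the in-cluster name and the service endpoints can be checked like this (a sketch using the names above):

kubectl run tmp --rm -it --image=busybox --restart=Never -- nslookup ingress-nginx.ingress-nginx.svc.cluster.local
kubectl get endpoints ingress-nginx -n ingress-nginx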

I am using minikube and enabled ingress by running minikube addons enable ingress. And yes, I am running the tunnel with minikube tunnel.

Upvotes: 5

Views: 8898

Answers (1)

Will R.O.F.

Reputation: 4128

I tested your environment and found the same behavior: external access works, but internally the connection is refused. This is how I solved it:

  • The Minikube ingress addon deploys the controller in the kube-system namespace. If you deploy your service in a newly created namespace, its selector will not match the controller deployment in kube-system.
  • It's easy to mix up these concepts, because the default nginx-ingress deployment uses the ingress-nginx namespace, as you were trying to do.
  • Another issue I found is that your service does not carry all the selectors assigned to the controller deployment.

  • The easiest way to make your deployment work is to run kubectl expose on the nginx controller:

kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system
  • With the nginx-ingress-controller service created this way, all communications worked, both external and internal. (A declarative equivalent is sketched after this list.)
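
For reference, a Service manifest roughly equivalent to that kubectl expose command might look like the sketch below. The selector label is an assumption; copy the real labels from the controller pods (kubectl get pods -n kube-system --show-labels) before applying it:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed; verify against the controller pods
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80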

Reproduction:

  • For this example I'm using only two ingress backends, to keep the explanation from getting repetitive.
  • Using minikube 1.11.0
  • Enabled ingress and metallb addons.
  • Deployed two hello apps, v1 and v2; both pods listen on port 8080 and are exposed as NodePort services (a sketch of how to create them follows the listing):
$ kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello1-svc   NodePort    10.110.211.119   <none>        8080:31243/TCP   95m
hello2-svc   NodePort    10.96.9.66       <none>        8080:31316/TCP   93m
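One way to create equivalent deployments and services, assuming the google-samples hello-app images, is:

$ kubectl create deployment hello --image=gcr.io/google-samples/hello-app:1.0
$ kubectl create deployment hello2 --image=gcr.io/google-samples/hello-app:2.0
$ kubectl expose deployment hello --name=hello1-svc --type=NodePort --port=8080
$ kubectl expose deployment hello2 --name=hello2-svc --type=NodePort --port=8080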
  • Here is the ingress file, just like yours; I only changed the backend service names and ports to match my deployments:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            backend:
              serviceName: hello1-svc
              servicePort: 8080
          - path: /?(.*)
            backend:
              serviceName: hello2-svc
              servicePort: 8080
  • Now I'll create the nginx-ingress service by exposing the controller deployment; this way all labels and settings are inherited:
$ kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system
service/ingress-nginx-controller exposed
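
You can confirm the new service (and the node port it was assigned) with:

$ kubectl get service ingress-nginx-controller -n kube-system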
  • Now we deploy the ingress object:
$ kubectl apply -f ingress.yaml 
ingress.networking.k8s.io/ingress-service created

$ kubectl get ingress
NAME              CLASS    HOSTS           ADDRESS      PORTS   AGE
ingress-service   <none>   ticketing.dev   172.17.0.4   80      56s

$ minikube ip
172.17.0.4
  • Testing the ingress from the outside:
$ tail -n 1 /etc/hosts
172.17.0.4 ticketing.dev

$ curl http://ticketing.dev/?foo
Hello, world!
Version: 2.0.0
Hostname: hello2-67bbbf98bb-s78c4

$ curl http://ticketing.dev/api/users/?foo
Hello, world!
Version: 1.0.0
Hostname: hello-576585fb5f-67ph5
  • Then I deployed an alpine pod to test access from inside the cluster:
$ kubectl run --generator=run-pod/v1 -it alpine --image=alpine -- /bin/sh
/ # nslookup ingress-nginx-controller.kube-system.svc.cluster.local
Server:         10.96.0.10
Address:        10.96.0.10:53

Name:   ingress-nginx-controller.kube-system.svc.cluster.local
Address: 10.98.167.112

/ # apk update
/ # apk add curl

/ # curl -H "Host: ticketing.dev" ingress-nginx-controller.kube-system.svc.cluster.local/?foo
Hello, world!
Version: 2.0.0
Hostname: hello2-67bbbf98bb-s78c4

/ # curl -H "Host: ticketing.dev" ingress-nginx-controller.kube-system.svc.cluster.local/api/users/?foo
Hello, world!
Version: 1.0.0
Hostname: hello-576585fb5f-67ph5

As you can see, all requests were fulfilled.


Note:

  • As pointed out by @suren, when curling the ingress, I had to specify the host with the -H flag.

  • The service name needs to be an FQDN, because we are dealing with a service hosted in another namespace, using the format <SVC_NAME>.<NAMESPACE>.svc.cluster.local.

  • In your JS app, you will have to set the Host header in order to reach the ingress (a sketch follows this list).
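
A minimal Node.js sketch, using the controller FQDN and path from the reproduction above (adjust the names to your own services):

const http = require('http');

// Call the controller service by its in-cluster FQDN, overriding the
// Host header so nginx matches the ticketing.dev ingress rule.
http.get({
  host: 'ingress-nginx-controller.kube-system.svc.cluster.local',
  path: '/api/users/',
  headers: { Host: 'ticketing.dev' },
}, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(res.statusCode, body));
});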

If you have any question let me know in the comments.

Upvotes: 6
