smaikap

Reputation: 522

Exposing service in (GKE) Kubernetes only with internal ip

TL;DR: In a GKE private cluster, I'm unable to expose a service with only an internal/private IP.

Our deployment consists of around 20 microservices and 4 monoliths, currently running entirely on VMs on Google Cloud, and I'm trying to move this infrastructure to GKE. The first step of the project is to build a private GKE cluster (i.e. without any public IP) as a replacement for our staging environment. Since this is staging, I need to expose all the microservice and monolith endpoints internally for debugging purposes (that is, only to clients connected to the VPC), and that is where I'm stuck. I tried the following approaches:

  1. Put an internal load balancer (ILB) in front of each service and monolith. Example:
apiVersion: v1
kind: Service
metadata:
  name: session
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: session
    type: ms
spec:
  type: LoadBalancer
  selector:
    app: session
  ports:
  - name: grpc
    port: 80
    targetPort: 80
    protocol: TCP

[screenshot omitted]

This works, though with a severe limitation: each ILB creates a forwarding rule, and GCP limits a network to 75 forwarding rules. With roughly 24 services per cluster, that means we cannot build more than 3 clusters in a network, which is not acceptable to us.
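For reference, the forwarding rules that the ILBs create can be inspected with the gcloud CLI (a quick check, assuming gcloud is authenticated against the project):

gcloud compute forwarding-rules list --filter="loadBalancingScheme=INTERNAL"

Each internal LoadBalancer Service shows up here and counts against the per-network quota.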

  2. a. I tried placing an ingress in front of all the services; without an ingress class this is handled by the default GCE ingress controller, which always exposes the entire cluster with a public IP - an absolute no-no:
apiVersion: extensions/v1beta1
kind: Ingress
# hostNetwork: true   # not a valid Ingress field; hostNetwork belongs in a Pod spec
metadata:
  name: ingress-ms-lb
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: gs03
    http:
      paths:
      - path: /autodelivery/*
        backend:
          serviceName: autodelivery
          servicePort: 80
      - path: /session/*
        backend:
          serviceName: session
          servicePort: 80

b. I tried using an nginx ingress controller, which ends up not having an IP at all:


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-ms-lb
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    # cloud.google.com/load-balancer-type: "Internal"
    nginx.ingress.kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.class: "nginx"
    # nginx.ingress.kubernetes.io/whitelist-source-range: 10.100.0.0/16, 10.110.0.0/16
spec:
  rules:
  - host: svclb
    http:
      paths:
      - path: /autodelivery/*
        backend:
          serviceName: autodelivery
          servicePort: 80
      - path: /session/*
        backend:
          serviceName: session
          servicePort: 80
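
Note: the commented-out cloud.google.com/load-balancer-type: "Internal" annotation above applies to Service objects of type LoadBalancer, not to Ingress objects, so on its own it has no effect here. If it were used, it would go on the nginx controller's own Service, roughly like the sketch below (the name and selector are assumptions and would need to match the actual controller deployment):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx            # assumed name of the controller Service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed labels of the controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP

With that setup the whole cluster would consume only a single forwarding rule for the ILB.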

The third option was to configure firewall rules that would cut off any access to the public IPs. This was rejected internally on security grounds.

I'm stuck at this stage and need some pointers to move forward. Please help.

Upvotes: 0

Views: 1307

Answers (1)

amonaco

Reputation: 70

I can see from the screenshot you attached that your GKE cluster is a private cluster.

Since you would like to reach the services and applications inside the GKE cluster from other resources in the same VPC network, I would suggest using a Service of type NodePort [1].

[1] https://cloud.google.com/kubernetes-engine/docs/concepts/service#service_of_type_nodeport
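
A minimal sketch of what that could look like for the session service from the question (labels and ports are taken from the question's manifest; the nodePort value is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: session
  labels:
    app: session
spec:
  type: NodePort
  selector:
    app: session
  ports:
  - name: grpc
    port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30080   # optional; omit to let Kubernetes pick one from 30000-32767

Anything in the VPC can then reach the service at <node-internal-IP>:30080, provided a firewall rule allows traffic from your VPC ranges to the cluster nodes on that port, and no per-service forwarding rule is created.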

Upvotes: 1
