Jimmy

Reputation: 37101

Kubernetes services for different application tracks

I'm working on an application deployment environment using Kubernetes where I want to be able to spin up copies of my entire application stack based on a Git reference for the primary web application, e.g. "master" and "my-topic-branch". I want these copies of the app stack to coexist in the same cluster. I can create Kubernetes services, replication controllers, and pods that use a "gitRef" label to isolate the stacks from each other, but some of the pods in the stack depend on each other (via Kubernetes services), and I don't see an easy, clean way to restrict which services are exposed to a pod.
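
For example, each stack's services can select pods by the gitRef label, but the service names must then also carry the ref to avoid collisions within the cluster (a sketch; the "backend" name and port numbers are illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: backend-master   # illustrative; name must differ per stack
    labels:
      gitRef: master
  spec:
    selector:
      app: backend
      gitRef: master       # only route to pods from the "master" stack
    ports:
    - port: 80
      targetPort: 8080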

There are a couple of ways I can think of to achieve this, but neither is ideal:

1. Put each copy of the stack in its own namespace [†], although the documentation suggests namespaces aren't intended for separating different deployments of the same application.
2. Include the Git ref in each service's name, which means the application itself needs logic to determine the correct hostname to use.

Essentially what I want is a way of telling Kubernetes, "Expose this service's hostname only to pods with the given labels, and expose it to them with the given hostname" for a specific service. This would allow me to use the second approach without needing application-level logic to determine the correct hostname.

What's the best way to achieve what I'm after?

[†] http://kubernetes.io/v1.1/docs/user-guide/namespaces.html

Upvotes: 4

Views: 1451

Answers (1)

Christian Stewart

Reputation: 15519

I think the documentation's advice against putting different versions in different namespaces is a bit off the mark. Separating things completely like this is exactly what namespaces are for. You should put a complete copy of each "track" or deployment stage of your app into its own namespace.
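
A namespace per track is just one more manifest (a sketch; "dev-1" matches the Ingress example further down, and in the question's terms the names would follow the Git refs):

  apiVersion: v1
  kind: Namespace
  metadata:
    name: dev-1   # one namespace per track, e.g. "master", "my-topic-branch"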

You can then use hardcoded service names, e.g. "http://myservice/", because DNS resolves unqualified names to the local namespace by default.
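
This works because cluster DNS qualifies short names with the pod's own namespace, so the short name, the namespaced name, and the fully qualified name all reach the same Service (a sketch; "myservice" is a placeholder):

  # From a pod in namespace dev-1, all of these resolve to the same Service:
  #   http://myservice/
  #   http://myservice.dev-1/
  #   http://myservice.dev-1.svc.cluster.local/
  apiVersion: v1
  kind: Service
  metadata:
    name: myservice   # placeholder name
    namespace: dev-1
  spec:
    selector:
      app: myservice
    ports:
    - port: 80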

For Ingresses, I've copied my answer here from the GitHub issue on cross-namespace Ingresses.

You should use the approach that our group is using for Ingresses.

Think of an Ingress not so much as a LoadBalancer but as a document specifying mappings between URLs and Services within the same namespace.

An example, from a real document we use:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: ingress
    namespace: dev-1
  spec:
    rules:
    - host: api-gateway-dev-1.faceit.com
      http:
        paths:
        - backend:
            serviceName: api-gateway
            servicePort: 80
          path: /
    - host: api-shop-dev-1.faceit.com
      http:
        paths:
        - backend:
            serviceName: api-shop
            servicePort: 80
          path: /
    - host: api-search-dev-1.faceit.com
      http:
        paths:
        - backend:
            serviceName: api-search
            servicePort: 8080
          path: /
    tls:
    - hosts:
      - api-gateway-dev-1.faceit.com
      - api-search-dev-1.faceit.com
      - api-shop-dev-1.faceit.com
      secretName: faceitssl

We make one of these for each namespace, i.e. one per track.

Then, we have a single namespace with an Ingress Controller: automatically configured NGINX pods, run as a DaemonSet so that exactly one lands on every node in the cluster, and exposed via a NodePort. A separate AWS load balancer points at these pods.
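
A minimal sketch of that controller setup, assuming the stock nginx-ingress-controller from kubernetes/contrib of that era; the image tag, namespace, NodePort numbers, and the default-http-backend service are illustrative:

  apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: nginx-ingress-controller
    namespace: ingress   # illustrative; one cluster-wide controller namespace
  spec:
    template:
      metadata:
        labels:
          app: nginx-ingress-controller
      spec:
        containers:
        - name: nginx-ingress-controller
          image: gcr.io/google_containers/nginx-ingress-controller:0.8.3   # illustrative tag
          args:
          - /nginx-ingress-controller
          # assumes a default-http-backend Service exists in this namespace
          - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          ports:
          - containerPort: 80
          - containerPort: 443
  ---
  # NodePort Service the AWS load balancer targets on every node
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-ingress
    namespace: ingress
  spec:
    type: NodePort
    selector:
      app: nginx-ingress-controller
    ports:
    - name: http
      port: 80
      nodePort: 30080   # illustrative; must be in the cluster's NodePort range
    - name: https
      port: 443
      nodePort: 30443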

Traffic is then routed like this:

Internet -> AWS ELB -> NGINX (on node) -> Pod

We keep the isolation between namespaces while using Ingresses as they were intended. It's neither correct nor sensible to use one Ingress to hit multiple namespaces, given how they're designed. The solution is one Ingress per namespace, with a cluster-scoped Ingress Controller that actually does the routing.

All an Ingress is to Kubernetes is an object with some data on it. It's up to the Ingress Controller to do the routing.

See the document here for more info on Ingress Controllers.

Upvotes: 9
