jan.vogt

Reputation: 1817

How to migrate ManagedCertificates from a regional to a zonal GKE cluster without downtime

I am currently running a regional GKE cluster and want to migrate to a new zonal one. The old cluster has an Ingress object with a public IP that uses Google Managed Certificates for HTTPS termination.

My migration plan is:

  1. Create new zonal cluster.
  2. kubectl apply -f clusterConfig.yaml.
  3. Move public IP to new cluster.

The big problem with this is that the ManagedCertificates will need at least 15 minutes after the IP is moved to become ready. This would render all services unavailable during that time. Is there any way to use the old ManagedCertificates' keys in the new cluster until the new ManagedCertificates are ready?

Upvotes: 5

Views: 652

Answers (1)

Mr.KoopaKiller

Reputation: 3982

After some research and testing in my lab account, I'm going to explain how you can reuse/reassign the current ManagedCertificate across multiple load balancers.

As mentioned here:

When your domain resolves to IP addresses of multiple load balancers (multiple Ingresses), you should create a single ManagedCertificate resource and attach it to all the Ingresses. If you instead create many ManagedCertificate resources and attach each of them to a separate Ingress, the Certificate Authority may not be able to verify the ownership of your domain and some of your certificates may not be provisioned
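For context, the ManagedCertificate resource itself is small; a minimal sketch (the name and domain below are placeholders, and depending on your GKE version the apiVersion may be networking.gke.io/v1 instead) looks like this:

apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
spec:
  domains:
    - example.com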

I'm running a simple application on a regional GKE cluster (old) with Kubernetes 1.17.4, and I want to move it to a new zonal cluster running GKE Kubernetes 1.17.5.

In the old cluster I've created a ManagedCertificate and an Ingress. In the new cluster I'm going to create only an Ingress, reusing the previous ManagedCertificate:

  1. New LoadBalancer IP

Let's start by allocating a new IP address for the LoadBalancer:

gcloud compute addresses create newip --global

Get the new IP with the following command:

gcloud compute addresses describe newip --global

Result:

address: 34.107.xxx.xxx
...
  2. Deploying the application

For this example I'm using a simple echo-server deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  type: NodePort
  selector:
    app: echo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
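Assuming the manifest above is saved as echo.yaml (the filename is arbitrary), apply it against the new cluster and check that the pods and the NodePort service come up:

kubectl apply -f echo.yaml
kubectl get pods -l app=echo
kubectl get svc echo-svc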
  3. Creating the ingress

You need to get the value of the key ingress.gcp.kubernetes.io/pre-shared-cert from the old Ingress, and configure the annotation kubernetes.io/ingress.global-static-ip-name with the new IP's name.

You can use the command kubectl get ing old-ingress -oyaml on the previous cluster to get the key.
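If you only want that single value, a jsonpath query should also work; old-ingress here is a placeholder for your actual Ingress name:

kubectl get ing old-ingress \
  -o jsonpath='{.metadata.annotations.ingress\.gcp\.kubernetes\.io/pre-shared-cert}'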

Why? This is explained here:

Managed Certificates communicate with Ingress using the kubernetes.io/pre-shared-cert annotation.

and here:

ingress.gcp.kubernetes.io/pre-shared-cert: Use this annotation to reference the certificates and keys

The final YAML will look like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-new-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: newip #new ip name
    ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-a798051f-a50d-4b38-84b1-xxxxxxxxxxxx # from the old ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: echo-svc
          servicePort: 80
        path: /
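Assuming this spec is saved as new-ingress.yaml (the filename is arbitrary), applying it on the new cluster looks like this:

kubectl apply -f new-ingress.yaml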

Wait for the load balancer provisioning. After a few minutes, verify that everything is OK using kubectl get ing, and try to curl the IP (SSL will not match yet, because you are requesting the raw IP):

curl -IL -k 34.107.xxx.xxx

HTTP/2 200
x-powered-by: Express
content-type: application/json; charset=utf-8
content-length: 647
etag: W/"287-qCxPIULxqrMga5xHN8AAKMHsUi4"
date: Wed, 20 May 2020 11:49:14 GMT
via: 1.1 google
alt-svc: clear
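Before touching DNS, you can also confirm that the reused certificate is served correctly for your domain by pinning the domain's resolution to the new IP with curl's --resolve option; example.com below is a placeholder for your real domain:

curl -IL --resolve example.com:443:34.107.xxx.xxx https://example.com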
  4. Changing DNS record

At this point we have a functional application with an Ingress using the SSL certificate provisioned from the old cluster.

To move all traffic from the previous cluster to the new one, you just need to change the DNS record to point to the new IP.

Depending on which DNS provider you are using, you can create a new DNS entry with the new IP and control the traffic using DNS weighting, round-robin, etc.
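For example, if the zone happens to be hosted on Cloud DNS, the cutover could look like this minimal sketch (my-zone, example.com, the TTL and OLD_LB_IP are placeholders):

gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction remove --zone=my-zone \
    --name=example.com. --type=A --ttl=300 "OLD_LB_IP"
gcloud dns record-sets transaction add --zone=my-zone \
    --name=example.com. --type=A --ttl=300 "34.107.xxx.xxx"
gcloud dns record-sets transaction execute --zone=my-zone

Lowering the TTL ahead of time helps cached resolvers pick up the change quickly; keep the old cluster serving until the old record has expired.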

References:

https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs

Upvotes: 7
