Reputation: 1672
I'm working on my first deployment in GKE, so I'm pretty new to the concepts, but I understand where they're going with the tools; I just need the experience to be confident.
First, I have a cluster with about five services, two of which I want to expose via an external load balancer. I've defined an annotation for Gcloud to set these up under load balancing, and that seems to be working. I've also set up an annotation to create network endpoint groups (NEGs) for the services. Here's how one is configured in the deployment and service manifests.
---
#api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f ./docker-compose.yml
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f ./docker-compose.yml
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: api
    spec:
      containers:
      - args:
        - bash
        - -c
        - node src/server.js
        env:
        - name: NODE_ENV
          value: production
        - name: TZ
          value: America/New_York
        image: gcr.io/<PROJECT_ID>/api
        imagePullPolicy: Always
        name: api
        ports:
        - containerPort: 8087
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
status: {}
---
#api-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    cloud.google.com/neg: '{"ingress": true}'
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  type: LoadBalancer
  ports:
  - name: "8087"
    port: 8087
    targetPort: 8087
status:
  loadBalancer: {}
I think I may be missing some kind of configuration here, but I'm unsure.
I've also seen that I can define liveness checks in the YAML by adding:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
I also have my ingress configured like this:
---
# master-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master-application-ingress
  annotations:
    ingress.kubernetes.io/secure-backends: "true"
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: api
          servicePort: 8087
  - http:
      paths:
      - path: /ui
        backend:
          serviceName: ui
          servicePort: 80
I've also seen that it can just take the port for TCP checks, but I've already defined these in my application and in the load balancer. I guess I want to know where I should be defining these checks.
Also, I have an issue with the NEGs created by the annotation being empty. Or is this normal with manifest-created NEGs?
Upvotes: 3
Views: 3872
Reputation: 9685
You can now also create a BackendConfig as a separate Kubernetes declaration.
My example:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: cms-backend-config
  namespace: prod
spec:
  healthCheck:
    checkIntervalSec: 60
    port: 80
    type: HTTP # case-sensitive
    requestPath: /your-healthcheck-path
  connectionDraining:
    drainingTimeoutSec: 60
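For the BackendConfig to take effect, it also has to be referenced from the Service it configures. A minimal sketch, assuming a service exposing port 80; the service name, selector, and target port below are placeholders, and older GKE versions use the beta.cloud.google.com/backend-config annotation instead:
apiVersion: v1
kind: Service
metadata:
  name: cms                     # placeholder name
  namespace: prod
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    # maps the service port to the BackendConfig defined above
    cloud.google.com/backend-config: '{"ports": {"80": "cms-backend-config"}}'
spec:
  selector:
    app: cms                    # placeholder selector
  ports:
  - port: 80
    targetPort: 8080            # placeholder target port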
I don't have any readiness/liveness probes explicitly defined at all, and everything works. I've also noticed there are still occasional glitches between GKE and the rest of GCP. I remember needing to re-create both my deployments and ingress from scratch at some point, after playing around with different options for quite a while.
What I also did, and this might have been the main reason I started seeing endpoints in the automatically registered NEGs, was add a default backend to the ingress so that a separate default backend isn't registered with the load balancer:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prod-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: load-balancer-ip
    networking.gke.io/managed-certificates: my-certificate
spec:
  backend:
    serviceName: my-service
    servicePort: 80
  rules:
  - host: "example.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
Upvotes: 0
Reputation: 4899
The health check is created based on your readinessProbe, not livenessProbe. Make sure to have a readinessProbe configured in your pod spec before creating the ingress resource.
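For example, a minimal sketch of what that could look like in the api container spec from your deployment; the /healthz path is only an assumption here, use whatever health endpoint your app actually serves:
readinessProbe:
  httpGet:
    path: /healthz   # assumed endpoint, adjust to your app
    port: 8087
  initialDelaySeconds: 10
  periodSeconds: 10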
As for the empty NEG, this might be due to a mismatch in the health check. The NEG relies on the readiness gate feature (explained here); since you only have a livenessProbe defined, it is entirely possible the health check is misconfigured and thus failing.
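For reference, when container-native load balancing is in use, GKE injects a Pod readiness gate that looks roughly like this in the pod spec (shown only to illustrate the mechanism; you don't add it yourself):
spec:
  readinessGates:
  - conditionType: "cloud.google.com/load-balancer-neg-ready"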
You should also have an internal IP for the internal LB you created; can you reach the pods that way? If both are failing, the health check is likely the issue, since the NEG will not add pods to the group that it sees as not ready.
Upvotes: 3