Reputation: 10136
I need to expose a service in GKE via an internal load balancer (the service can terminate TLS on its own).
There is a clear guide on how to do that, and everything works as expected except for one thing: the load balancer that gets created automatically is configured with an HTTP health check at the hard-coded path /healthz, while the service implements its health check at a different path. As a result, the load balancer never "sees" the backing instance groups as healthy.
Is there a way to provide a custom health-check config to an internal TCP load-balancer in GKE?
Just for context: I tried the approach described in another guide on configuring Ingress features (creating a BackendConfig and annotating the Service accordingly), but unfortunately that does not work for a TCP load balancer (while it does work if I deploy an HTTP load balancer with an Ingress resource). A sketch of what I tried is below.
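Concretely, this is roughly what I tried; all names, ports, and paths are placeholders:

```yaml
# BackendConfig with a custom health check, as described in the Ingress guide.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backend-config          # placeholder name
spec:
  healthCheck:
    type: HTTP
    requestPath: /my/health/path   # placeholder: the path my service actually serves
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service                 # placeholder name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
    cloud.google.com/backend-config: '{"default": "my-backend-config"}'
spec:
  type: LoadBalancer
  selector:
    app: my-app                    # placeholder label
  ports:
  - port: 443
    targetPort: 8443
```

The annotation is picked up for an HTTP(S) load balancer created via an Ingress, but the internal TCP load balancer keeps its hard-coded /healthz check.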
Upvotes: 1
Views: 1865
Reputation: 93
I bumped into the same issue, but I found the answer in the official GKE documentation under Load balancer health checks:
The externalTrafficPolicy of the Service defines how the load balancer's health check operates. In all cases, the load balancer's health check probers send packets to the kube-proxy software running on each node. The load balancer's health check is a proxy for information that the kube-proxy gathers, such as whether a Pod exists, is running, and has passed its readiness probe. Health checks for LoadBalancer Services cannot be routed to serving Pods. The load balancer's health check is designed to direct new TCP connections to nodes.
externalTrafficPolicy: Cluster
All nodes of the cluster pass the health check regardless of whether the node is running a serving Pod.
externalTrafficPolicy: Local
Only the nodes with at least one ready, serving Pod pass the health check. Nodes without a serving Pod and nodes with serving Pods that have not yet passed their readiness probes fail the health check.
If the serving Pod has failed its readiness probe or is about to terminate, a node might still pass the load balancer's health check even though it does not contain a ready and serving Pod. This situation happens when the load balancer's health check has not yet reached its failure threshold. How the packet is processed in this situation depends on the GKE version.
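So instead of trying to repoint the load balancer's health check, you can set externalTrafficPolicy: Local and express the real health endpoint as the Pod's readiness probe, which the kube-proxy-served /healthz check then reflects. A minimal sketch, with all names, ports, and paths as placeholders:

```yaml
# The LB health check keeps probing kube-proxy's /healthz, but with
# externalTrafficPolicy: Local it reflects Pod readiness, so the custom
# health path lives in the Pod's readinessProbe instead.
apiVersion: v1
kind: Service
metadata:
  name: my-service                       # placeholder
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local           # only nodes with ready serving Pods pass
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                           # placeholder
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: example.com/my-app:latest # placeholder image
        ports:
        - containerPort: 8443
        readinessProbe:                  # the service's real health endpoint
          httpGet:
            path: /my/health/path        # placeholder for the actual path
            port: 8443
            scheme: HTTPS
```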
I hope you will find it helpful.
Upvotes: 1
Reputation: 928
Yes, you can edit an existing health check or define a new one. You can create a health check using the Cloud Console, the Google Cloud CLI, or the REST APIs. Refer to this documentation for more information on creating a health check with the TCP protocol; a gcloud sketch follows below.
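For example, a regional TCP health check could be created with the gcloud CLI like this (name, region, port, and thresholds are placeholders):

```sh
# Create a regional TCP health check; internal TCP/UDP load balancers
# use regional health checks on their backend services.
gcloud compute health-checks create tcp my-tcp-health-check \
    --region=us-central1 \
    --port=8443 \
    --check-interval=5s \
    --timeout=5s \
    --healthy-threshold=2 \
    --unhealthy-threshold=2
```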
Unlike a proxy load balancer, an internal TCP/UDP load balancer doesn't terminate connections from clients and then open new connections to backends. Instead, an internal TCP/UDP load balancer routes connections directly from clients to the healthy backends, without any interruption.
The protocol of the health check does not have to match the protocol of the load balancer. Regardless of the type of health check that you create, Google Cloud sends health check probes to the IP address of the internal TCP/UDP load balancer's forwarding rule, to the network interface in the VPC selected by the load balancer's backend service.
Note: Internal TCP/UDP load balancers use health check status to determine how to route new connections, as described in Traffic distribution. The way that an internal TCP/UDP load balancer distributes new connections depends on whether you have configured failover.
Upvotes: 1