0x416e746f6e

Reputation: 10136

Custom health-check config for GKE internal TCP load-balancer

I need to expose a service in GKE via an internal load-balancer (the service can terminate TLS on its own).

There is a clear guideline on how to do that, and all works as expected except for one thing: the LB that gets created automatically is configured with an HTTP health-check at the hard-coded path /healthz, while the service implements its health-check at a different path. As a result, the load-balancer never "sees" the backing instance-groups as healthy.

Is there a way to provide a custom health-check config to an internal TCP load-balancer in GKE?

Just for context: I tried to follow the approach described in another guide on configuring Ingress features (by creating a BackendConfig and annotating the service accordingly), but unfortunately that does not work for a TCP load-balancer (while it does if I deploy an HTTP load-balancer with an Ingress resource).
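For illustration, the Service looks roughly like this (names, labels and ports are placeholders); the cloud.google.com/backend-config annotation is the Ingress-features approach mentioned above, which has no effect here:

```yaml
# Sketch only: names, labels and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-tls-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"                   # internal TCP load-balancer
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}' # ignored for this LB type
spec:
  type: LoadBalancer
  selector:
    app: my-tls-app
  ports:
    - name: https
      port: 443
      targetPort: 8443
```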

Upvotes: 1

Views: 1865

Answers (2)

Milan Ilic

Reputation: 93

I have bumped into the same issue, but I found the answer in the official GKE documentation, Load balancer health checks:

The externalTrafficPolicy of the Service defines how the load balancer's health check operates. In all cases, the load balancer's health check probers send packets to the kube-proxy software running on each node. The load balancer's health check is a proxy for information that the kube-proxy gathers, such as whether a Pod exists, is running, and has passed its readiness probe. Health checks for LoadBalancer Services cannot be routed to serving Pods. The load balancer's health check is designed to direct new TCP connections to nodes.

externalTrafficPolicy: Cluster

All nodes of the cluster pass the health check regardless of whether the node is running a serving Pod.

externalTrafficPolicy: Local

Only the nodes with at least one ready, serving Pod pass the health check. Nodes without a serving Pod and nodes with serving Pods that have not yet passed their readiness probes fail the health check.

If the serving Pod has failed its readiness probe or is about to terminate, a node might still pass the load balancer's health check even though it does not contain a ready and serving Pod. This situation happens when the load balancer's health check has not yet reached its failure threshold. How the packet is processed in this situation depends on the GKE version.
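In practice this means the service's own health-check path belongs on the Pod's readinessProbe, not on the load balancer: with externalTrafficPolicy: Local, only nodes whose Pods pass that probe are reported healthy. A rough sketch, with placeholder names, path and ports:

```yaml
# Sketch only: names, path and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-tls-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # only nodes with a ready, serving Pod pass the LB health check
  selector:
    app: my-tls-app
  ports:
    - port: 443
      targetPort: 8443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-tls-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-tls-app
  template:
    metadata:
      labels:
        app: my-tls-app
    spec:
      containers:
        - name: app
          image: example.com/my-tls-app:latest    # placeholder image
          ports:
            - containerPort: 8443
          readinessProbe:
            httpGet:
              path: /my/custom/health             # the service's own health-check path
              port: 8443
              scheme: HTTPS                       # the service terminates TLS itself
            periodSeconds: 10
```

The load balancer's probes still go to kube-proxy on each node, as described above; what they reflect is the readiness state driven by the probe on the Pod.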

I hope you will find it helpful.

Upvotes: 1

Goli Nikitha

Reputation: 928

Yes, you can edit an existing health check or define a new one. You can create a health check using the Cloud Console, the Google Cloud CLI, or the REST APIs. Refer to this documentation for more information on creating a health check with the TCP protocol.
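As an illustration of the shape of such a health check (the REST API takes JSON; the equivalent fields are shown as YAML here, and the name, port and timing values are placeholders):

```yaml
# Hypothetical regional TCP health check resource (compute.regionHealthChecks);
# name, port and timings are placeholders.
name: custom-tcp-health-check
type: TCP
tcpHealthCheck:
  port: 8443
checkIntervalSec: 10
timeoutSec: 5
healthyThreshold: 2
unhealthyThreshold: 3
```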

Unlike a proxy load balancer, an internal TCP/UDP load balancer doesn't terminate connections from clients and then open new connections to backends. Instead, an internal TCP/UDP load balancer routes connections directly from clients to the healthy backends, without any interruption.

  • There's no intermediate device or single point of failure.
  • Client requests to the load balancer's IP address go directly to the healthy backend VMs.
  • Responses from the healthy backend VMs go directly to the clients, not back through the load balancer. TCP responses use direct server return.

The protocol of the health check does not have to match the protocol of the load balancer. Regardless of the type of health check that you create, Google Cloud sends health check probes to the IP address of the internal TCP/UDP load balancer's forwarding rule, to the network interface in the VPC selected by the load balancer's backend service.

Note: Internal TCP/UDP load balancers use health check status to determine how to route new connections, as described in Traffic distribution.

The way that an internal TCP/UDP load balancer distributes new connections depends on whether you have configured failover:

  • If you haven't configured failover, an internal TCP/UDP load balancer distributes new connections to its healthy backend VMs if at least one backend VM is healthy. When all backend VMs are unhealthy, the load balancer distributes new connections among all backends as a last resort. In this situation, the load balancer routes each new connection to an unhealthy backend VM.
  • If you have configured failover, an internal TCP/UDP load balancer distributes new connections among VMs in its active pool, according to a failover policy that you configure. When all backend VMs are unhealthy, the behavior depends on the options set in that failover policy.

Upvotes: 1
