Jatinshravan

Reputation: 445

Kubernetes LoadBalancer type service's external IP is unreachable from pods within the cluster when externalTrafficPolicy is set to Local in GCE

The external IP is perfectly reachable from outside the cluster, and from all nodes within the cluster. However, when I try to telnet to the external IP from a pod that is not on the same node as one of the service's backend pods, the connection always times out.

The external IP is reachable by pods that run on the same node as a pod that is part of the service backend.

All pods can perfectly reach the cluster IP of the service.

When I set externalTrafficPolicy to Cluster, the pods are able to reach the external URL regardless of what node they're on.

I am using iptables proxying and Kubernetes 1.16.
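For context, this is a minimal sketch of the kind of Service manifest involved (the name, selector, and ports are placeholders, not taken from my actual setup):

```yaml
# Hypothetical LoadBalancer Service exhibiting the problem.
# externalTrafficPolicy: Local is the setting in question.
apiVersion: v1
kind: Service
metadata:
  name: my-service            # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: my-app               # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
```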

I'm completely at a loss here as to why this is happening. Is someone able to shed some light on this?

Upvotes: 0

Views: 1037

Answers (1)

Ryan Siu

Reputation: 1002

From the official documentation:

service.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.

A Service routes external traffic to either node-local or cluster-wide endpoints. When you set externalTrafficPolicy to Local, only node-local endpoints are used: the external IP is served only by nodes that actually run a backend pod, so traffic from pods on other nodes cannot reach it. This matches what you observed, where only pods co-located with a backend pod can connect.

So, if pods on any node need to reach the external IP, set externalTrafficPolicy to Cluster instead.
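A sketch of the corrected spec (the Service name and ports are placeholders; only the externalTrafficPolicy line changes):

```yaml
# With Cluster, every node forwards traffic for the external IP,
# at the cost of losing the client source IP and a possible second hop.
apiVersion: v1
kind: Service
metadata:
  name: my-service            # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: my-app               # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
```

Note the trade-off quoted above: Cluster obscures the client source IP, so if you rely on source-IP preservation you will need a different approach.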

Upvotes: 1
