chr0nk

Reputation: 27

Can you work around the one pod/node/container per LoadBalancer limit in Kubernetes?

As a learning project, I currently have a honeypot running in Kubernetes, which works fine (the only downside is that I can't see the actual source IPs, because from the K8s perspective everything comes from the load balancer).

I want to build a cluster of honeypots and eventually add an ELK backend to which all of the logs will be sent and visualised. What I can't seem to figure out is how to use one load balancer with different ports for different containers. Is there a better way to tackle this problem? I kind of understand the one Service, one LoadBalancer thing, but I'm surely not the only one who has faced this problem.
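
Something like this is roughly what I'm picturing, though I'm not sure it's the right approach: a single LoadBalancer Service exposing several ports that map to different honeypot containers (the names and port numbers below are just placeholders):

apiVersion: v1
kind: Service
metadata:
  name: honeypot-service        # placeholder name
spec:
  selector:
    app: honeypot               # pods running the honeypot containers
  ports:
    - name: ssh                 # port names are required when a Service exposes more than one port
      port: 22
      targetPort: 2222          # ssh honeypot container
    - name: http
      port: 80
      targetPort: 8080          # http honeypot container
  type: LoadBalancer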

Any help is appreciated. Thanks in advance.

Upvotes: 0

Views: 321

Answers (1)

mario

Reputation: 11098

When it comes to preserving the client's source IP behind an external load balancer, this fragment of the official Kubernetes documentation should fully answer your question:

Preserving the client source IP

Due to the implementation of this feature, the source IP seen in the target container is not the original source IP of the client. To enable preservation of the client IP, the following fields can be configured in the service spec (supported in GCE/Google Kubernetes Engine environments):

  • service.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.
  • service.spec.healthCheckNodePort - specifies the health check node port (numeric port number) for the service. If healthCheckNodePort isn't specified, the service controller allocates a port from your cluster's NodePort range. You can configure that range by setting an API server command line option, --service-node-port-range. It will use the user-specified healthCheckNodePort value if specified by the client. It only has an effect when type is set to LoadBalancer and externalTrafficPolicy is set to Local.

Setting externalTrafficPolicy to Local in the Service configuration file activates this feature.

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  externalTrafficPolicy: Local ### 👈
  type: LoadBalancer
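
The documentation fragment above also mentions healthCheckNodePort. If you prefer to pin that port down instead of letting the service controller allocate one, you can set it explicitly; a minimal sketch, assuming 32000 is free within your cluster's NodePort range:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  externalTrafficPolicy: Local
  healthCheckNodePort: 32000   # only honored with type LoadBalancer and externalTrafficPolicy Local
  type: LoadBalancer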

The key point is setting externalTrafficPolicy to Local, which should entirely solve your problem with preserving the original source IP. Keep in mind, though, that this setting also has some downsides: it can potentially lead to less evenly balanced traffic, as you can read in this fragment:

There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.
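
Applied to your honeypot scenario, a single LoadBalancer Service can expose several ports (one per honeypot container) and still preserve client source IPs. A rough sketch, with hypothetical names and port numbers:

apiVersion: v1
kind: Service
metadata:
  name: honeypot-lb            # hypothetical name
spec:
  selector:
    app: honeypot
  ports:
    - name: ssh                # names are required for multi-port Services
      port: 22
      targetPort: 2222
    - name: telnet
      port: 23
      targetPort: 2323
  externalTrafficPolicy: Local # preserves the original client IP
  type: LoadBalancer

The containers selected by app: honeypot just need to actually listen on the targetPorts you declare; the cloud provider then provisions one load balancer for the whole Service.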

Upvotes: 1
