suren

Reputation: 8786

Real Client IP for TCP services - Nginx Ingress Controller

We have HTTP and TCP services behind the Nginx Ingress Controller. The HTTP services are configured through an Ingress object; when a request arrives, a configuration snippet extracts the client name from the hostname, generates a header (Client-Id), and passes it to the service.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-pre
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host ~ ^(?<client>[^.]+)\.(?<app>pre|presaas)\.(host1|host2)\.com$) {
        more_set_input_headers 'Client-Id: $client';
      }
spec:
  tls:
  - hosts: 
    - "*.pre.host1.com" 
    secretName: pre-host1
  - hosts: 
    - "*.presaas.host2.com"
    secretName: presaas-host2

  rules:
  - host: "*.pre.host1.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-front
            port:
              number: 80

  - host: "*.presaas.host2.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-front
            port:
              number: 80

The TCP service is exposed directly through a ConfigMap; clients connect to it over a raw TCP socket.

apiVersion: v1
data:
  "12345": pre/service-back:12345
kind: ConfigMap
metadata:
  name: tcp-service
  namespace: ingress-nginx

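For context, a ConfigMap like the one above only takes effect if the controller process is started with the standard `--tcp-services-configmap` flag pointing at it. A sketch of the relevant part of the controller Deployment (deployment and container names are the usual ingress-nginx defaults, adjust to your install):

```yaml
# Sketch: the ingress-nginx controller must be told where the TCP ConfigMap lives.
# Names below are assumed defaults, not taken from the question.
spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=ingress-nginx/tcp-service
```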
All this config works fine. The TCP clients connect through TCP sockets and the users connect through HTTP without problems. The problem is that when a TCP client establishes its connection, it reads its own source IP address ($remote_addr in Nginx terms) and reports it back to an admin endpoint, where it is shown in a dashboard. So there is a dashboard listing all connected TCP clients with their IP addresses. What happens now is that all the IP addresses shown are not the clients' own, but the IP of the Ingress Controller pod.

I set use-proxy-protocol: "true", and it seems to resolve the issue for the TCP connections: in the logs I can see the different external IP addresses connecting. But now the HTTP services do not work, including the dashboard itself. These are the logs:

while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:80
2022/04/04 09:00:13 [error] 35#35: *5273 broken header: "��d�hԓ�:�����ӝp��E�L_"�����4�<����0�,�(�$��
����kjih9876�w�s��������" while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:443

I know the broken-header logs come from the HTTP services: if I telnet to the HTTP port I trigger the broken-header error, while if I telnet to the TCP port I get clean logs with what I expect.
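To illustrate why the HTTP side breaks: with proxy protocol enabled, Nginx expects every incoming connection to begin with a plaintext preamble like `PROXY TCP4 <src> <dst> <sport> <dport>\r\n` before the real payload. An HTTPS client instead opens with a binary TLS ClientHello, which is exactly the garbage shown in the broken-header log. A minimal Python sketch of a v1-style parser (a hypothetical helper for illustration, not part of any library):

```python
def parse_proxy_v1(data: bytes):
    """Split a PROXY protocol v1 preamble off a byte stream.

    Returns (src_ip, dst_ip, remaining_payload); raises ValueError when
    the peer did not send a PROXY preamble (what Nginx logs as a broken
    header).
    """
    line, sep, rest = data.partition(b"\r\n")
    parts = line.decode("ascii").split(" ")  # may raise on binary junk (TLS bytes)
    if not sep or parts[0] != "PROXY":
        raise ValueError("broken header: peer did not send PROXY protocol")
    # parts: ["PROXY", "TCP4", src, dst, sport, dport]
    return parts[2], parts[3], rest

# A proxy-protocol-aware client: the preamble carries the real source IP.
src, dst, payload = parse_proxy_v1(b"PROXY TCP4 1.2.3.4 10.0.0.5 56324 12345\r\nhello")
print(src)  # → 1.2.3.4

# A plain TLS client: its first bytes are a binary ClientHello, not "PROXY ...".
try:
    parse_proxy_v1(b"\x16\x03\x01\x02\x00\x01\x00")
except ValueError as exc:
    print(exc)  # → broken header: peer did not send PROXY protocol
```

This is why the setting cannot be half-on: every peer of a proxy-protocol listener must send the preamble, and ordinary browsers and TLS clients never do.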

I hope the issue is clear. What I need is a way to configure the Nginx Ingress Controller to serve both HTTP and TCP services. I don't know whether the use-proxy-protocol: "true" parameter can be enabled for only one service; it seems to be a global parameter.
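For reference, use-proxy-protocol does live in the controller's global ConfigMap, so it applies to every listener (HTTP and TCP alike) at once; there is no per-service form of this key. A sketch, assuming the default ConfigMap name and namespace of a standard install:

```yaml
# Sketch: the controller-wide ConfigMap (name/namespace are assumed defaults).
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"   # global: affects all listeners, not one service
```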

For now the solution we are considering is to set up a separate Network Load Balancer (this runs in an AWS EKS cluster) just for the TCP service, and leave HTTP behind the Ingress Controller.
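That workaround could look roughly like the following: a dedicated Service of type LoadBalancer for the TCP backend, which provisions its own NLB and preserves the client source IP via externalTrafficPolicy: Local. The service name and selector below are assumptions (match them to your actual Deployment); the annotation is the standard AWS load balancer type one:

```yaml
# Sketch: dedicated NLB for the TCP service, bypassing the Ingress Controller.
apiVersion: v1
kind: Service
metadata:
  name: service-back-nlb           # hypothetical name
  namespace: pre
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the original client source IP
  selector:
    app: service-back              # assumed label, match your Deployment
  ports:
  - port: 12345
    targetPort: 12345
    protocol: TCP
```

With externalTrafficPolicy: Local, the NLB only routes to nodes running a backend pod and the pod sees the real client address, so no proxy protocol is needed on this path.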

Upvotes: 1

Views: 935

Answers (1)

Irulandi Ganesan

Reputation: 1

To solve this issue, go to the NLB target groups and enable Proxy Protocol v2 in the Attributes tab: Network LB >> Listeners >> TCP80/TCP443 >> select Target Group >> Attributes tab >> Enable Proxy protocol v2.
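The same change can be scripted with the AWS CLI; `proxy_protocol_v2.enabled` is the standard ELBv2 target-group attribute key (the ARN below is a placeholder to fill in):

```
aws elbv2 modify-target-group-attributes \
  --target-group-arn <your-target-group-arn> \
  --attributes Key=proxy_protocol_v2.enabled,Value=true
```

Note this only helps once the load balancer actually sends the preamble; the controller-side use-proxy-protocol setting must match it, and it still applies to all listeners at once.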

Upvotes: 0
