Jonny

Reputation: 2927

nginx-ingress: Too many redirects when force-ssl is enabled

I am setting up my first ingress in kubernetes using nginx-ingress. I set up the ingress-nginx load balancer service like so:

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "ingress-nginx",
    "namespace": "...",
    "labels": {
      "k8s-addon": "ingress-nginx.addons.k8s.io"
    },
    "annotations": {     
      "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "tcp",
      "service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "*",
      "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn....",
      "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "443"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": "http",
        "nodePort": 30591
      },
      {
        "name": "https",
        "protocol": "TCP",
        "port": 443,
        "targetPort": "http",
        "nodePort": 32564
      }
    ],
    "selector": {
      "app": "ingress-nginx"
    },
    "clusterIP": "...",
    "type": "LoadBalancer",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "hostname": "blablala.elb.amazonaws.com"
        }
      ]
    }
  }
}

Notice how the https port's targetPort points at the http port (80), so that SSL is terminated at the load balancer.

My ingress looks something like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  name: something
  namespace: ...
  annotations:
    ingress.kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: some-service
              servicePort: 2100

Now when I navigate to the URL I get a Too many redirects error. What confuses me is that when I add the header X-Forwarded-Proto: https I get the expected response (curl https://www.example.com -v -H "X-Forwarded-Proto: https").

Any ideas how I can resolve the issue?

P.S. This works just fine with ingress.kubernetes.io/force-ssl-redirect: "false", and there don't seem to be any extraneous redirects.

Upvotes: 12

Views: 37825

Answers (6)

Devqxz

Reputation: 111

Adding this annotation: nginx.ingress.kubernetes.io/backend-protocol: HTTPS

fixed it for me. A full example is here: https://github.com/argoproj/argoproj-deployments/blob/master/argo-workflows/resources/argo-server-ingress.yaml
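For context, a minimal sketch of where that annotation goes (resource and service names are placeholders, not from the original question):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-ingress                     # placeholder name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # the backend pods serve HTTPS themselves, so the controller
    # must proxy to them over HTTPS instead of plain HTTP
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: some-service       # placeholder
                port:
                  number: 2100
```

This annotation only helps when the backend itself speaks TLS; with a plain-HTTP backend it will cause 502s instead.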

Upvotes: 3

Tara Prasad Gurung

Reputation: 3559

I had this issue in a Keycloak setup deployed via a Helm chart as well. SSL termination is done on the ELB, so to fix it I made the following change in the Helm values:

ingress:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"

This fixed it for me.

Upvotes: 0

Rotem jackoby

Reputation: 22128

Adding another cause for the Too many redirects error.

This happened while using ingress-nginx as an ingress controller in front of some k8s services.

One of the services (ArgoCD in my case) handled TLS termination by itself and always redirected HTTP requests to HTTPS.

The problem is that the nginx ingress controller also handles TLS termination and talks to the backend service over HTTP, so the ArgoCD server always responds with a redirect to HTTPS. That is the cause of the redirect loop.

No combination of values for the ingress annotations below will help:

annotations:
  nginx.ingress.kubernetes.io/ssl-redirect: false/true
  nginx.ingress.kubernetes.io/backend-protocol: "HTTP"/"HTTPS"

The solution was to ensure that the service doesn't handle TLS itself, by passing the --insecure flag to the argocd-server deployment:

spec:
  template:
    spec:
      containers:
      - name: argocd-server
        command:
        - argocd-server
        - --repo-server
        - argocd-repo-server:8081
        - --insecure # <-- Here
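An alternative, assuming a reasonably recent ArgoCD version (check the docs for the release you run): the same setting can be applied declaratively through the argocd-cmd-params-cm ConfigMap instead of editing the deployment's command:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # run argocd-server without its own TLS; the ingress
  # controller terminates TLS in front of it
  server.insecure: "true"
```

The server picks this up as if --insecure had been passed, which survives chart upgrades better than a patched command line.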

Upvotes: 19

dpolicastro

Reputation: 1519

I had to add these annotations to make it work without changing the ingress-controller:

    annotations:
      kubernetes.io/ingress.class: nginx-ingress-internal # <- AWS NLB
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: |
        if ($http_x_forwarded_proto = 'http') {
          return 301 https://$host$request_uri;
        }

Upvotes: 6

stipx

Reputation: 41

Another approach that worked for my environment (k8s v1.16.15, rancher/nginx-ingress-controller:nginx-0.32.0-rancher1):

apiVersion: v1
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
kind: ConfigMap
metadata:
  labels:
    app: ingress-nginx
  name: nginx-configuration
  namespace: ingress-nginx

This worked together with force-ssl-redirect on the application's ingress. It seems that the ingress controller does not honor the X-Forwarded-Proto header from the ELB out of the box.

Upvotes: 2

Anton Kostenko

Reputation: 8983

That is a known issue with the SSL-redirect annotation in combination with proxy-protocol and SSL termination on the ELB.

The question was raised on GitHub, and here is a fix from that thread:

  1. Create a custom ConfigMap for the Nginx ingress controller instead of using the force-ssl-redirect annotation, like the following:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        app: ingress-nginx
      name: nginx-ingress-configuration
      namespace: <ingress-namespace>
    data:
      ssl-redirect: "false"
      hsts: "true"
      server-tokens: "false"
      http-snippet: |
        server {
          listen 8080 proxy_protocol;
          server_tokens off;
          return 301 https://$host$request_uri;
        }
    

    That configuration will create an additional listener with a simple redirection to https.

  2. Then apply that ConfigMap to your ingress controller, and add port 8080 to its container definition and to the Service.
  3. Now you can point port 80 of your ELB at port 8080 of the Service.

With that additional listener, it will work.
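A sketch of the Service side of steps 2 and 3, based on the ingress-nginx Service from the question (the port name is a placeholder, and nodePort values are omitted so the cluster assigns them):

```yaml
# ingress-nginx Service: route ELB port 80 traffic to the
# proxy_protocol redirect listener defined in the http-snippet
spec:
  ports:
    - name: http-redirect      # placeholder name
      protocol: TCP
      port: 80                 # ELB forwards plain HTTP here
      targetPort: 8080         # the redirect listener in the controller
    - name: https
      protocol: TCP
      port: 443
      targetPort: http         # SSL already terminated at the ELB
```

The controller pod also needs a matching containerPort 8080 in its container definition so the targetPort resolves.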

Upvotes: 11
