Aman

Reputation: 295

Getting 502 Bad Gateway with GKE Ingress & Ingress showing warning "Some backend services are in UNHEALTHY state"

I have created a cluster with one deployment; the Deployment and Service YAML are given below. I am able to access the service using the LoadBalancer IP, but the IP I get from the Ingress returns a 502 Bad Gateway error, and the Ingress status shows the warning "Some backend services are in UNHEALTHY state".

kubectl get services

W0616 13:40:33.177300    2655 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
kubernetes        ClusterIP      10.78.0.1    <none>           443/TCP        103m
sso-dev-service   LoadBalancer   10.78.2.34   35.221.253.217   80:31774/TCP   94m

Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    autopilot.gke.io/resource-adjustment: '{"input":{"containers":[{"name":"cent-sha256-1"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requests":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"name":"cent-sha256-1"}]},"modified":true}'
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2022-06-16T12:04:04Z"
  generation: 6
  labels:
    app: sso-dev
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:replicas: {}
    manager: vpa-recommender
    operation: Update
    subresource: scale
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"cent-sha256-1"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: GoogleCloudConsole
    operation: Update
    time: "2022-06-16T13:25:31Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2022-06-16T13:25:31Z"
  name: sso-dev
  namespace: default
  resourceVersion: "59379"
  uid: 04c78a81-3828-48d0-8613-1ed4866c788a
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sso-dev
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sso-dev
    spec:
      containers:
      - image: us-east4-docker.pkg.dev/centegycloud-351515/sso/cent@sha256:adc66157a00ef08ff912460b1383dfc729c37af4574a2a6a8a031e03a790a7ca
        imagePullPolicy: IfNotPresent
        name: cent-sha256-1
        resources:
          limits:
            cpu: 500m
            ephemeral-storage: 1Gi
            memory: 2Gi
          requests:
            cpu: 500m
            ephemeral-storage: 1Gi
            memory: 2Gi
        securityContext:
          capabilities:
            drop:
            - NET_RAW
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-06-16T12:06:55Z"
    lastUpdateTime: "2022-06-16T12:06:55Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-06-16T13:33:17Z"
    lastUpdateTime: "2022-06-16T13:33:17Z"
    message: ReplicaSet "sso-dev-7cf94546d5" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 6
  readyReplicas: 1
  replicas: 2
  unavailableReplicas: 1
  updatedReplicas: 1

Service YAML

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    cloud.google.com/neg-status: '{"network_endpoint_groups":{"80":"k8s1-63bcc32c-default-sso-dev-service-80-163de44a"},"zones":["asia-east1-b","asia-east1-c"]}'
  creationTimestamp: "2022-06-16T12:06:08Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: sso-dev
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:allocateLoadBalancerNodePorts: {}
        f:externalTrafficPolicy: {}
        f:internalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: GoogleCloudConsole
    operation: Update
    time: "2022-06-16T12:06:08Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"service.kubernetes.io/load-balancer-cleanup": {}
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2022-06-16T12:06:53Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:cloud.google.com/neg-status: {}
    manager: glbc
    operation: Update
    subresource: status
    time: "2022-06-16T13:13:23Z"
  name: sso-dev-service
  namespace: default
  resourceVersion: "48308"
  uid: 99d9f322-1497-4213-8715-306a9a8baf50
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.78.2.34
  clusterIPs:
  - 10.78.2.34
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 31774
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: sso-dev
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.221.253.217

Ingress YAML

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s-be-32126--63bcc32c813af5d1":"HEALTHY","k8s1-63bcc32c-default-sso-dev-service-80-163de44a":"UNHEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s2-fr-edkqbxd1-default-ingress-test-boc5ft42
    ingress.kubernetes.io/target-proxy: k8s2-tp-edkqbxd1-default-ingress-test-boc5ft42
    ingress.kubernetes.io/url-map: k8s2-um-edkqbxd1-default-ingress-test-boc5ft42
  creationTimestamp: "2022-06-16T13:13:23Z"
  finalizers:
  - networking.gke.io/ingress-finalizer-V2
  generation: 1
  managedFields:
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:rules: {}
    manager: GoogleCloudConsole
    operation: Update
    time: "2022-06-16T13:13:23Z"
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:ingress.kubernetes.io/backends: {}
          f:ingress.kubernetes.io/forwarding-rule: {}
          f:ingress.kubernetes.io/target-proxy: {}
          f:ingress.kubernetes.io/url-map: {}
        f:finalizers:
          .: {}
          v:"networking.gke.io/ingress-finalizer-V2": {}
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: glbc
    operation: Update
    subresource: status
    time: "2022-06-16T13:15:11Z"   name: ingress-test   namespace: default   resourceVersion: "49795"   uid: 7f1a8710-322c-41e2-9a53-1ce2d0eb0ced spec:   rules:
  - http:
      paths:
      - backend:
          service:
            name: sso-dev-service
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific

Upvotes: 0

Views: 1909

Answers (1)

MBHA Phoenix

Reputation: 2217

I see you're using a Service of type LoadBalancer while setting the cloud.google.com/neg: '{"ingress":true}' annotation on the Service resource.

You must use a Service of type ClusterIP, as stated in the Container-native load balancing documentation:

In the Service manifest, you must use type: NodePort unless you're using container native load balancing. If using container native load balancing, use the type: ClusterIP.
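
For illustration, here's a minimal sketch of what the corrected Service could look like (assuming, as in your manifests, the app: sso-dev selector and a container listening on port 8080):

apiVersion: v1
kind: Service
metadata:
  name: sso-dev-service
  labels:
    app: sso-dev
  annotations:
    cloud.google.com/neg: '{"ingress":true}'   # container-native load balancing via NEGs
spec:
  type: ClusterIP        # not LoadBalancer: the Ingress reaches the pods directly through the NEG
  selector:
    app: sso-dev
  ports:
  - port: 80             # port referenced by the Ingress backend
    protocol: TCP
    targetPort: 8080     # port the container listens on

After applying it, the backends should eventually turn HEALTHY; you can verify with kubectl describe ingress ingress-test and check the ingress.kubernetes.io/backends annotation.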

Upvotes: 2
