suku

Reputation: 471

502 Bad Gateway error while launching the Angular application after deploying to a Kubernetes cluster

I am deploying an Angular application on Kubernetes. After deployment the pod is up and running, but when I try to access the application through the ingress I get a 502 Bad Gateway error. The application was working fine until I made some recent functional changes and redeployed using the same yaml/config files that were used for the initial deployment. I'm clueless about what is wrong here.

Note:

1. This is not a duplicate of 72064326, as the server is listening on the correct port in nginx.conf (a quick way to verify this is shown below).
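
To rule out the pod itself, the port can be checked with a port-forward that bypasses both the Service and the Ingress (names taken from the manifests below):

    kubectl port-forward deploy/appname 8080:8080 -n yyyyyy
    # in a second shell: this should return the Angular index.html
    curl -v http://localhost:8080/appname/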

Here are my files

1. Dockerfile

# stage1 as builder
FROM node:16.14.0 as builder


FROM nginx:alpine

## Remove default nginx config page
# RUN rm -rf /etc/nginx/conf.d

# COPY ./.nginx/nginx.conf /etc/nginx/nginx.conf

COPY ./.nginx/nginx.conf /etc/nginx/conf.d/default.conf

## Remove default nginx index page
RUN rm -rf /usr/share/nginx/html/*

# Copy the pre-built Angular bundle from the build context
COPY dist/appname /usr/share/nginx/html

EXPOSE 8080

ENTRYPOINT ["nginx", "-g", "daemon off;"]
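
(Side note: as written, the builder stage is empty and dist/appname is copied from the host, so this image only builds correctly if ng build has already been run locally. A complete multi-stage version might look roughly like the sketch below, assuming a standard Angular CLI project whose build output lands in dist/appname.)

    # stage 1: build the Angular bundle inside the image
    FROM node:16.14.0 as builder
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci
    COPY . .
    RUN npx ng build --configuration production

    # stage 2: serve the bundle with nginx on port 8080
    FROM nginx:alpine
    COPY ./.nginx/nginx.conf /etc/nginx/conf.d/default.conf
    RUN rm -rf /usr/share/nginx/html/*
    COPY --from=builder /app/dist/appname /usr/share/nginx/html
    EXPOSE 8080
    ENTRYPOINT ["nginx", "-g", "daemon off;"]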

2. nginx.conf (custom nginx)

server {
    listen 8080;

    root /usr/share/nginx/html;
    include /etc/nginx/mime.types;

    location /appname/ {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    location ~ \.(js|css)$ {
        root /usr/share/nginx/html;
        # try finding the file first; if it's not found,
        # fall back to the Angular app
        try_files $uri /index.html =404;
    }
}

3. Deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    com.xxx.path: /platform/custom
  name: appname 
  namespace: yyyyyy
  
spec:
  selector:
    matchLabels:
      io.kompose.service: appname 
  replicas: 1
  template:
    metadata:
      labels:
        clusterName: custom2
        department: customplatform
        io.kompose.service: appname 
        com.xxxx.monitor: application
        com.xxxx.platform: custom
        com.xxxx.service: appname
    spec:
      containers:
      - env:
        - name: ENVIRONMENT
          value: yyyyyy
        resources:
           requests:
             memory: "2048Mi"     
           limits:
             memory: "4096Mi"
        image: cccc.rrr.xxxx.aws/imgrepo/imagename:latest 
        imagePullPolicy: Always
        name: image
        ports:
        - containerPort: 8080
      restartPolicy: Always


4. Service.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    com.xxxx.path: /platform/custom
  labels:
    clusterName: custom2
    department: customplatform
    io.kompose.service: appname
    com.xxxx.monitor: application
    com.xxxx.platform: custom
    com.xxxx.service: appname
  name: appname
  namespace: yyyyyy
spec:
  ports:
  - name: "appname"
    port: 8080
    targetPort: 8080 
  selector:
    io.kompose.service: appname

5. Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
 name: custom-ingress
 namespace: yyyyyy
 annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-redirect-from: "http://custom-yyyyyy.dev.xxxx.aws:8080/" 
    nginx.ingress.kubernetes.io/proxy-redirect-to: "$scheme://$http_host/"
spec:
  rules:
  - host: custom-yyyyyy.dev.xxxx.aws
    http:
      paths:
      - backend:
          serviceName: appname
          servicePort: 8080
        path: /appname


[![application screenshot][1]][1]

  [1]: https://i.sstatic.net/CX3k1.png

Upvotes: 0

Views: 1253

Answers (1)

Gonfva

Reputation: 1468

The screenshot you have attached shows an nginx error. Initially I thought it was a configuration error in your pod (an error in the actual container).

But then I noticed you are using an NGINX ingress controller, so most likely the issue is in the ingress controller.

I would proceed mechanically, as with anything related to Kubernetes ingress.

In particular:

  1. Check the logs on the ingress controller for error messages. I don't have experience with the NGINX ingress controller, but health checking across mixed protocols (https externally, http to the service) tends to be tricky. With the ALB controller, I always check that the target groups have backend services. In your case I would first test without the redirect-from and redirect-to annotations; again, I haven't used the NGINX controller, but "$scheme://$http_host/" looks strange.
  2. Check that the service has endpoints defined (kubectl get endpoints appname -n yyyyyy), which will tell you whether the pods are running and whether the service is connected to the pods. Both checks are sketched below.
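
Concretely, those two checks could look like this (the ingress-nginx namespace and controller deployment name are assumptions; they vary by installation):

    # 1. tail the ingress controller logs while reproducing the 502
    kubectl logs -n ingress-nginx deploy/ingress-nginx-controller -f

    # 2. confirm the Service actually has endpoints behind it;
    #    an empty ENDPOINTS column means the selector matches no pods
    kubectl get endpoints appname -n yyyyyy
    kubectl describe ingress custom-ingress -n yyyyyy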

Upvotes: 1
