Reputation: 163
I'm trying to run a Metabase deployment in an existing, working EKS Kubernetes cluster, but I keep getting 503 Service Temporarily Unavailable.
The pod is running, the ports appear to be assigned correctly, and an Application Load Balancer is running and forwarding to the Metabase service.
Service:
apiVersion: v1
kind: Service
metadata:
  name: metabase-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: metabase
  type: ClusterIP
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  generation: 1
  labels:
    deployment: metabase
  name: metabase
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: metabase
  template:
    metadata:
      labels:
        app: metabase
    spec:
      containers:
      - name: metabase
        image: metabase/metabase
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
          protocol: TCP
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
spec:
  ingressClassName: alb
  rules:
  - host: staging.my-app.nl
    http:
      paths:
      - backend:
          service:
            name: my-app-service
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: metabase.my-app.nl
    http:
      paths:
      - backend:
          service:
            name: metabase-service
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - staging.my-app.nl
    secretName: myapp-certs-2022
  - hosts:
    - metabase.my-app.nl
    secretName: myapp-certs-2022
Output from kubectl get ingress:

NAME         CLASS   HOSTS                                  ADDRESS                                                         PORTS     AGE
my-ingress   alb     staging.my-app.nl,metabase.my-app.nl   k8s-default-my-app-12334444444.eu-central-1.elb.amazonaws.com   80, 443   206d
Output from kubectl get svc:

NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
metabase   ClusterIP   10.100.127.227   <none>        80/TCP    56m
The single pod is running and logs show no errors.
The rest of the cluster and services are working without issues.
Can anyone point me in the right direction as to why my service is unavailable?
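For reference, a few checks that can confirm the in-cluster wiring independently of the ALB (a sketch; the service and ingress names are taken from the manifests above):

# Does the Service actually select the pod? An empty ENDPOINTS column means a selector mismatch.
kubectl get endpoints metabase-service -n default

# Talk to the Service directly, bypassing the load balancer
kubectl port-forward svc/metabase-service 8080:80 -n default
# then in another shell: curl -I http://localhost:8080

# See how the ALB controller resolved the Ingress
kubectl describe ingress my-ingress -n default

If the port-forward works but the ALB still returns 503, the problem is on the load balancer side rather than in the Service or Deployment.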
Upvotes: 0
Views: 350
Reputation: 163
In the logs of the aws-loadbalancer-controller I found:
"error":"ResourceInUse: Listener port '80' is in use by registered target
With the help of this article I learned that multiple Ingresses can share one ALB, as long as you give them the same group name:
alb.ingress.kubernetes.io/group.name
So I moved all resources of the new app (Metabase) to a new namespace, added an extra Ingress there, and gave both Ingresses the same group name like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: dev
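For completeness, a sketch of what the extra Metabase Ingress could look like with the group annotation applied; the namespace and ingress name are assumptions on my part, while the host and service come from the question:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: metabase-ingress
  namespace: metabase          # assumed name of the new namespace
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: dev
spec:
  ingressClassName: alb
  rules:
  - host: metabase.my-app.nl
    http:
      paths:
      - backend:
          service:
            name: metabase-service
            port:
              number: 80
        path: /
        pathType: Prefix

Note that with target-type: ip the controller registers the pod IPs directly as targets, so the Service can stay ClusterIP.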
For me this started up a new load balancer for the group name, so I had to delete the old one.
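The new ALB's DNS name shows up in the ingress status after a minute or so:

kubectl get ingress -A

The old load balancer is not cleaned up automatically and has to be removed by hand, e.g. in the AWS console or via the AWS CLI (aws elbv2 describe-load-balancers to find the ARN, then aws elbv2 delete-load-balancer).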
Upvotes: 1