Reputation: 7069
I have a simple web application deployed to a Kubernetes cluster (EKS) with an AWS Load Balancer Controller ingress.
When the app is accessed the intended way, via the ALB endpoint, performance is very poor (2-3x worse than a regular deployment on a bare-metal instance). The benchmark was done with hey:
$ hey -t 30 -z 1m https://k8s-default-ingre-fdeb4c8b98-1975505070.us-east-1.elb.amazonaws.com/
# 5-10 reqs/s
$ hey -t 30 -z 1m http://172.16.3.37/ # from another pod accessing directly by its IP
# 20-30 reqs/s
Performance stays the same (i.e. fast) whether the app is accessed from the same pod, from another pod, or from a different instance (node) when exposed as a NodePort, so I'm assuming something is wrong with the ingress/ALB.
How can I identify the bottleneck and debug this kind of issue?
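A per-request timing breakdown would be one way to see where the extra time goes (a rough sketch with curl, reusing the ALB hostname and pod IP from the benchmarks above; the -w write-out variables split a request into DNS, TLS handshake and time-to-first-byte):

$ curl -s -o /dev/null -w 'dns=%{time_namelookup}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
    https://k8s-default-ingre-fdeb4c8b98-1975505070.us-east-1.elb.amazonaws.com/
$ curl -s -o /dev/null -w 'ttfb=%{time_starttransfer}s total=%{time_total}s\n' http://172.16.3.37/  # from inside the cluster

If ttfb against the ALB is much larger than against the pod while dns and tls stay small, the time is being added between the ALB and the target rather than in the app itself.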
Here's my config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: "/healthz/"
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-1::certificate/cert"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
Upvotes: 1
Views: 1293
Reputation: 907
Check how pod-to-pod and pod-to-service latencies compare in both environments (the EKS cluster and the bare-metal deployment); a rough way to collect those numbers is sketched below. If they are drastically different, the underlying network is most likely the cause - it would give you an idea of how nodes spread far apart in the EKS cluster affect latency, uneven network bandwidth, and so on.
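A minimal sketch of how those latencies could be measured (assumptions: the pod IP 172.16.3.37 from the question, a Service named app in the default namespace matching the ingress backend, and some pod with curl you can exec into - adjust names to your cluster):

$ kubectl exec -it <some-pod-with-curl> -- sh -c '
    for i in $(seq 1 20); do
      curl -s -o /dev/null -w "pod-to-pod %{time_total}s\n" http://172.16.3.37/
      curl -s -o /dev/null -w "pod-to-svc %{time_total}s\n" http://app.default.svc.cluster.local/
    done'

Running the same kind of loop against the bare-metal deployment gives you the second set of numbers to compare.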
If the latencies are roughly similar, you might want to dig into your application code with something like Jaeger or an application profiler to get a breakdown of where the time is spent.
Upvotes: 1