Reputation: 11
When I ran a stress test against nginx, the nginx deployment scaled up, but the newly created nginx pods received no load. If I stop the stress test for two minutes, all pods start serving traffic normally. As shown in the screenshot below: image
Once a pod created by the HPA is up and Running, it should be able to participate in load balancing normally.
The bitnami/nginx chart was installed with Helm:
# helm get values nginx -ntest-p1
USER-SUPPLIED VALUES:
autoscaling:
  enabled: false
  maxReplicas: 40
  minReplicas: 1
  targetCPU: 30
  targetMemory: 30
resources:
  limits:
    cpu: 200m
    memory: 128Mi
  requests:
    cpu: 200m
    memory: 128Mi
My test tool is http_load. If I start a stress test during scale-down, it reports a "no route to host" error.
Kubernetes version (use kubectl version): v1.18.8-aliyun.1
Cloud provider or hardware configuration: aliyun
OS (e.g: cat /etc/os-release): Alibaba Cloud Linux (Aliyun Linux) 2.1903 LTS (Hunting Beagle)
Kernel (e.g. uname -a): Linux 4.19.91-23.al7.x86_64 #1 SMP Tue Mar 23 18:02:34 CST 2021 x86_64 x86_64 x86_64 GNU/Linux
Network plugin and version (if this is a network-related bug): flannel:v0.11.0.2-g6e46593e-aliyun
Others:
kube-proxy runs in ipvs mode; all other configuration is default.
Same issue on GitHub: https://github.com/kubernetes/kubernetes/issues/101887
Upvotes: 1
Views: 779
Reputation: 12039
I am not familiar with http_load, and its documentation is quite sparse. From your observation I assume that http_load uses HTTP keep-alive and therefore reuses TCP connections. Kubernetes load-balances at the TCP connection level, so only new connections will reach the added replicas.
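To make the mechanism concrete, here is a minimal, self-contained Python sketch (standard library only; the local server is just a hypothetical stand-in for a pod) showing that HTTP keep-alive reuses a single TCP connection for many requests, which is why a connection-level balancer such as ipvs never re-routes that traffic to a new replica:

```python
import http.client
import http.server
import socketserver
import threading

# Tiny local HTTP/1.1 server (stand-in for a pod behind a Service).
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 => keep-alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = socketserver.TCPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One keep-alive client connection: every request rides the same TCP
# socket, so a TCP-level load balancer would pin all of them to one pod.
conn = http.client.HTTPConnection("127.0.0.1", port)
local_ports = set()
for _ in range(5):
    conn.request("GET", "/")
    conn.getresponse().read()          # drain the body so the socket is reused
    local_ports.add(conn.sock.getsockname()[1])
conn.close()
server.shutdown()

print("distinct TCP connections for 5 requests:", len(local_ports))  # → 1
```

With a tool that opens a fresh connection per request, the set of local ports would instead grow with each request, and ipvs would get a chance to spread those connections across all replicas.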
You can either configure nginx not to offer keep-alive (which will reduce efficiency for regular use cases), or start additional http_load instances after the scale-up has occurred so the new connections reach the new replicas.
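If you take the first route, the relevant nginx directive is keepalive_timeout. A minimal sketch of a server block (values are illustrative; merge the directive into your existing configuration, e.g. via the chart's serverBlock value if you are using bitnami/nginx):

```nginx
# keepalive_timeout 0 makes nginx close the connection after each
# response, so every request opens a new TCP connection and is
# re-balanced by kube-proxy/ipvs across all current replicas.
server {
    listen 8080;
    keepalive_timeout 0;

    location / {
        return 200 "ok\n";
    }
}
```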
Upvotes: 2