Reputation: 31
Trying to understand how sticky sessions should be configured when working with a Service of type=LoadBalancer in AWS. My backend is 2 pods running a Tomcat app. I see that the Service creates the AWS LB as well, and I set the right cookie value in the AWS LB configuration, but when accessing the system I keep switching between my pods/Tomcat instances.
My Service configuration:
kind: Service
apiVersion: v1
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  labels:
    app: app1
  name: AWSELB
  namespace: local
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: app1
Are there any additional settings that I am missing? Thank you, Jack
Upvotes: 2
Views: 4671
Reputation: 597
kube-proxy is L4: if requests arrive with the same source IP, it cannot distinguish clients. You need an L7 proxy to read those details. Check Ingress: https://github.com/kubernetes/ingress/tree/master/examples/affinity/cookie/nginx This uses a cookie to identify your user and bypasses the L4 kube-proxy by routing traffic directly to the pod.
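As a rough sketch of that cookie-affinity approach, assuming the NGINX ingress controller (the annotation keys, Ingress name, and host below are illustrative and vary by controller version), pointing at the Service from the question:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress                                          # hypothetical name
  namespace: local
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"            # enable cookie-based stickiness
    nginx.ingress.kubernetes.io/session-cookie-name: "route"  # cookie the controller sets
spec:
  rules:
  - host: app1.example.com                                    # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: AWSELB                                 # the Service from the question
          servicePort: 8080

The controller issues the cookie and keeps subsequent requests carrying it on the same pod instead of leaving the choice to kube-proxy. (With an ingress controller in front, the backend Service would normally be ClusterIP rather than LoadBalancer.)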
Upvotes: -1
Reputation: 583
Cookie-based stickiness on the ELB created by a Service of type LoadBalancer is not supported. Please see https://github.com/kubernetes/kubernetes/issues/2867 which has the gory details.
Upvotes: 1
Reputation: 3464
Can you try client-IP-based session affinity by setting service.spec.sessionAffinity to "ClientIP" (the default is "None")? See http://kubernetes.io/docs/user-guide/services/
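A minimal sketch of that change, reusing the Service from the question with only the sessionAffinity field added:

kind: Service
apiVersion: v1
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  labels:
    app: app1
  name: AWSELB
  namespace: local
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP   # keep requests from the same client IP on the same pod
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: app1

Note that behind an ELB the source IP kube-proxy sees may be the load balancer's own address, which is the same-source-IP limitation the first answer describes, so ClientIP affinity alone may not be enough.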
You can also try running an ingress controller, which can manage the internal routing better; see: https://github.com/kubernetes/kubernetes/issues/13892#issuecomment-223731222
Upvotes: 1