Reputation: 2952
I am testing security groups for pods, following the tutorial here: https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html
I was able to deploy it successfully: my pods are annotated with vpc.amazonaws.com/pod-eni: [eni etc], and I can confirm in the AWS console that a new ENI was created with the same private IP as the pod, with the selected security group attached to that ENI.
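For context, this is roughly the SecurityGroupPolicy I applied. It's a minimal sketch rather than my exact manifest: the policy name, namespace, pod label, and security group ID below are all placeholders.

    cat <<'EOF' | kubectl apply -f -
    apiVersion: vpcresources.k8s.aws/v1beta1
    kind: SecurityGroupPolicy
    metadata:
      name: demo-sg-policy            # placeholder name
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: demo                   # assumed pod label
      securityGroups:
        groupIds:
          - sg-0123456789abcdef0      # placeholder: the allow-all security group
    EOF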
For testing purposes, I have set this security group to accept all traffic, which means all my pods can reach each other on any port. I can also confirm that DNS resolution works from any pod, since I can reach services outside AWS (curl google / facebook, etc.).

My only problem is that I can't reach a pod from the same node it is running on (on any port). The weird part is that I can reach the pod from any other node: in a 3-node EKS cluster, if pod "pod-A" runs on node1, I can reach "pod-A" from node2 and node3, but not from node1.
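To make the symptom concrete, this is roughly how I tested it (the pod IP, port, and path here are made up for illustration):

    # Find pod-A's IP and the node it is scheduled on
    kubectl get pod pod-A -o wide

    # From node1 (the node hosting pod-A): times out, on any port
    curl -m 5 http://10.0.12.34:8080/

    # From node2 or node3: the exact same request succeeds
    curl -m 5 http://10.0.12.34:8080/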
This is a problem because the kubelet on that node fails all of the HTTP liveness/readiness probes, so my statefulset never comes up (I assume this would also be a problem for deployments, although I haven't tried). Like I said, the security groups for pods feature deploys successfully; I just have a hard time understanding why I can't reach a pod from its own node, even though the security group allows all traffic.
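The probe failures are visible in the pod events, which is how I confirmed it is the kubelet that can't connect (pod name from the example above):

    # Shows repeated "Readiness probe failed" / "Liveness probe failed" events
    kubectl describe pod pod-A | grep -i 'probe failed'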
eks platform version: eks.3
kubernetes version: 1.17
cni: amazon-k8s-cni-init:v1.7.4, amazon-k8s-cni:v1.7.4
Upvotes: 1
Views: 2294
Reputation: 2952
I was never going to figure that out on my own, so I also asked in the aws-vpc-cni-k8s repo, and they answered it: https://github.com/aws/amazon-vpc-cni-k8s/issues/1260
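For anyone who doesn't want to chase the link: per that issue (and the AWS docs page linked in the question), the kubelet connects to pods on branch network interfaces from the node itself, and those TCP probe connections fail unless TCP early demux is disabled on the CNI init container. The documented way to do that is to patch the aws-node daemonset:

    kubectl patch daemonset aws-node -n kube-system \
      -p '{"spec": {"template": {"spec": {"initContainers": [{"env": [{"name": "DISABLE_TCP_EARLY_DEMUX", "value": "true"}], "name": "aws-vpc-cni-init"}]}}}}'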
Upvotes: 1