Reputation: 138
I am trying to use a load balancer to direct traffic to a container backend. The service in the containers hosts web traffic on port 80. My health checks are all passing. If I SSH into the Kubernetes host for the containers, I can curl each container and get correct responses over port 80. When I try to access them through the load balancer's external IP, however, I receive a 502 response. I have firewall rules allowing traffic from 130.211.0.0/22 on tcp:1-5000 and on the NodePort port. I've also tried adding firewall rules from 0.0.0.0/0 on ports 80 and 443 to those nodes.
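For reference, the firewall rules described above can be sketched with gcloud roughly like this (the rule names and target tag are placeholders, not my actual values):

```shell
# Allow GCP load balancer / health check ranges to reach the nodes
gcloud compute firewall-rules create allow-lb-ranges \
    --source-ranges=130.211.0.0/22 \
    --allow=tcp:1-5000 \
    --target-tags=my-k8s-nodes

# Open 80 and 443 from anywhere (tried as a troubleshooting step)
gcloud compute firewall-rules create allow-http-https \
    --source-ranges=0.0.0.0/0 \
    --allow=tcp:80,tcp:443 \
    --target-tags=my-k8s-nodes
```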
Capturing with tcpdump on the Kubernetes host, I see the health check requests reaching my containers, but no traffic comes through when I make an external request.
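The capture on the node was done along these lines (the NodePort value 30080 is just an example; substitute whatever port your Service was assigned):

```shell
# Watch for both health-check and external traffic on the node,
# on the container port and the NodePort
sudo tcpdump -i any -nn 'tcp port 80 or tcp port 30080'
```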
I have an identical configuration that points to a single Compute Engine VM that works perfectly. This leads me to believe that the issue might be in the container setup rather than the load balancer.
Does anyone have any advice on resolving this issue?
Upvotes: 1
Views: 2135
Reputation: 138
I was able to resolve the problem by changing the Named Port that the Load Balancer was connecting to. By default, the Load Balancer connected to the Named Port "http", which pointed to port 80. I had assumed (always a bad thing) that this matched, since my application serves on port 80. Not so: because I'd exposed the containers through NodePort, Kubernetes assigned them a different port, and that NodePort is what my Health Check was pointing to. By going to "Compute Engine -> Instance groups", selecting the group, and then "Edit Group", I was able to change the Named Port "http" to match my NodePort number. Once I did that, traffic started flowing.
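For anyone who prefers the CLI, the same fix can be sketched with gcloud and kubectl (the Service name, instance group name, zone, and port 30080 below are placeholders for my setup):

```shell
# Find the NodePort Kubernetes assigned to the Service
kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'

# Point the instance group's "http" named port at that NodePort
# instead of the default port 80
gcloud compute instance-groups set-named-ports my-instance-group \
    --named-ports=http:30080 \
    --zone=us-central1-a
```

The key point is that the backend service resolves the named port "http" through the instance group, so the named port has to map to the NodePort, not to the port the application listens on inside the container.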
Upvotes: 2