Reputation: 179
We want to test Kubernetes load balancing, so we created a 2-node cluster that runs 6 replicas of our container. The container runs an Apache2 server with PHP and prints the pod name when we browse hostname.php.
Cluster details:
172.16.2.92 -- master and minion
172.16.2.91 -- minion
RC and service details:
frontend-controller.json:
{
  "kind": "ReplicationController",
  "apiVersion": "v1beta3",
  "metadata": {
    "name": "frontend",
    "labels": {
      "name": "frontend"
    }
  },
  "spec": {
    "replicas": 6,
    "selector": {
      "name": "frontend"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "frontend"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "php-hostname",
            "image": "naresht/hostname",
            "ports": [
              {
                "containerPort": 80,
                "protocol": "TCP"
              }
            ]
          }
        ]
      }
    }
  }
}
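For reference, the controller was loaded and checked roughly like this (a sketch; assumes kubectl is configured against the master above):
# kubectl create -f frontend-controller.json
# kubectl get pods -l name=frontend
The second command lists only the pods matching the controller's selector, so all six replicas should show up.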
frontend-service.json:
{
  "kind": "Service",
  "apiVersion": "v1beta3",
  "metadata": {
    "name": "frontend",
    "labels": {
      "name": "frontend"
    }
  },
  "spec": {
    "createExternalLoadBalancer": true,
    "ports": [
      {
        "port": 3000,
        "targetPort": 80,
        "protocol": "TCP"
      }
    ],
    "publicIPs": ["172.16.2.92"],
    "selector": {
      "name": "frontend"
    }
  }
}
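The service is created the same way; its endpoints list shows which pod IPs the proxy will actually balance across (a sketch; kubectl get endpoints is assumed to be available in this version):
# kubectl create -f frontend-service.json
# kubectl get endpoints frontend
If any of the six pod IPs are missing here, those pods can never be returned by the service.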
Pod details: frontend-01bb8, frontend-svxfl and frontend-yki5s are running on node 172.16.2.91; frontend-65ykz, frontend-c1x0d and frontend-y925t are running on node 172.16.2.92.
If we browse 172.16.2.92:3000/hostname.php, it prints the pod name.
Problem:
Running watch -n1 curl 172.16.2.92:3000/hostname.php on node 172.16.2.92 returns only that node's pods (frontend-65ykz, frontend-c1x0d and frontend-y925t); the pods on node 172.16.2.91 never appear. Running the same command on node 172.16.2.91 returns only that node's pods, never the pods on 172.16.2.92. Running it from outside the cluster shows only the 172.16.2.92 pods. We want requests to be spread across all pods, no matter where the command is run.
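One way to quantify this (a sketch, reusing the service address above) is to hit the service many times and tally the pod names that come back; with working load balancing all six names should appear:
# for i in $(seq 1 100); do curl -s 172.16.2.92:3000/hostname.php; echo; done | sort | uniq -c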
The details below give more information; please point out if anything looks wrong.
# kubectl get nodes
NAME LABELS STATUS
172.16.2.91 kubernetes.io/hostname=172.16.2.91 Ready
172.16.2.92 kubernetes.io/hostname=172.16.2.92 Ready
# kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
frontend-01bb8 172.17.0.84 172.16.2.91/172.16.2.91 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
frontend-65ykz 10.1.64.79 172.16.2.92/172.16.2.92 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
frontend-c1x0d 10.1.64.77 172.16.2.92/172.16.2.92 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
frontend-svxfl 172.17.0.82 172.16.2.91/172.16.2.91 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
frontend-y925t 10.1.64.78 172.16.2.92/172.16.2.92 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
frontend-yki5s 172.17.0.83 172.16.2.91/172.16.2.91 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
kube-dns-sbgma 10.1.64.11 172.16.2.92/172.16.2.92 k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns Running 45 hours
kube2sky gcr.io/google_containers/kube2sky:1.1 Running 45 hours
etcd quay.io/coreos/etcd:v2.0.3 Running 45 hours
skydns gcr.io/google_containers/skydns:2015-03-11-001 Running 45 hours
# kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
frontend name=frontend name=frontend 192.168.3.184 3000/TCP
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns k8s-app=kube-dns 192.168.3.10 53/UDP
kubernetes component=apiserver,provider=kubernetes <none> 192.168.3.2 443/TCP
kubernetes-ro component=apiserver,provider=kubernetes <none> 192.168.3.1 80/TCP
# iptables -t nat -L
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 192.168.3.184 /* default/frontend: */ tcp dpt:3000 redir ports 50734
REDIRECT tcp -- anywhere kube02 /* default/frontend: */ tcp dpt:3000 redir ports 50734
REDIRECT udp -- anywhere 192.168.3.10 /* default/kube-dns: */ udp dpt:domain redir ports 52415
REDIRECT tcp -- anywhere 192.168.3.2 /* default/kubernetes: */ tcp dpt:https redir ports 33373
REDIRECT tcp -- anywhere 192.168.3.1 /* default/kubernetes-ro: */ tcp dpt:http redir ports 60311
Chain KUBE-PORTALS-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere 192.168.3.184 /* default/frontend: */ tcp dpt:3000 to:172.16.2.92:50734
DNAT tcp -- anywhere kube02 /* default/frontend: */ tcp dpt:3000 to:172.16.2.92:50734
DNAT udp -- anywhere 192.168.3.10 /* default/kube-dns: */ udp dpt:domain to:172.16.2.92:52415
DNAT tcp -- anywhere 192.168.3.2 /* default/kubernetes: */ tcp dpt:https to:172.16.2.92:33373
DNAT tcp -- anywhere 192.168.3.1 /* default/kubernetes-ro: */ tcp dpt:http to:172.16.2.92:60311
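To narrow down whether kube-proxy or the overlay network is at fault, the pod IPs from kubectl get pods can be curled directly from each node (a sketch using the addresses above; --max-time just keeps a failing test short). On node 172.16.2.92, try one pod that is local and one that lives on 172.16.2.91:
# curl -s --max-time 3 10.1.64.79/hostname.php
# curl -s --max-time 3 172.17.0.84/hostname.php
If the second request times out, kube-proxy on 172.16.2.92 cannot reach the remote pods either, which would match only local pod names being returned.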
Thanks
Upvotes: 4
Views: 604
Reputation: 179
Flannel is not working properly, so run
/root/kube/reconfDocker.sh on every node.
This restarts Docker and flannel. Then check with ifconfig that the docker0 and flannel0 bridge IPs are in the same network; once they are, load balancing works. It worked for me.
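For example, after running the script the two bridges can be compared like this (a sketch; on this setup flannel hands out the 10.1.x.x overlay subnets):
# ifconfig docker0 | grep inet
# ifconfig flannel0 | grep inet
If docker0 is still on the default 172.17.0.0/16 range while flannel0 is on 10.1.x.x, Docker was not restarted with the flannel-provided subnet and cross-node pod traffic will fail, as with the 172.17.0.xx pods on 172.16.2.91 in the question.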
Upvotes: 1
Reputation: 315
It seems to me the problem is in the networking configuration. Pods on host 172.16.2.91 have IP addresses in 172.17.0.xx, which may not be reachable from the other host, i.e. 172.16.2.92; try pinging them from there (a quick test is sketched after the requirements below).
If ping fails, please check your networking against the Kubernetes requirements: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/networking.md
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
• all containers can communicate with all other containers without NAT
• all nodes can communicate with all containers (and vice-versa) without NAT
• the IP that a container sees itself as is the same IP that others see it as
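A minimal connectivity check with the pod IPs from the question: from 172.16.2.92, ping a pod that runs on 172.16.2.91, and the other way around from 172.16.2.91:
# ping -c 3 172.17.0.84
# ping -c 3 10.1.64.79
If either ping fails, the requirement that all nodes can communicate with all containers is not met.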
Upvotes: 0