Reputation: 601
I have installed Kubernetes on bare-metal/Ubuntu. I am on commit 6b649d7f9f2b09ca8b0dd8c0d3e14dcb255432d1
in git. I brought the cluster up with cd kubernetes/cluster; KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
followed by cd kubernetes/cluster/ubuntu; ./deployAddons.sh.
Everything went fine and the cluster came up.
My /ubuntu/config-default.sh
is as follows:
# Define all your cluster nodes, MASTER node comes first
# And separated with blank space like <user_1@ip_1> <user_2@ip_2> <user_3@ip_3>
export nodes=${nodes:-"[email protected] [email protected]"}
# Define each node's role: a(master) or i(minion) or ai(both master and minion); must be in the same order as above
role=${role:-"ai i"}
# It is practically impossible to set an array as an environment variable
# from a script, so assume the variable is a string and then convert it to an array
export roles=($role)
# Define minion numbers
export NUM_NODES=${NUM_NODES:-2}
# define the IP range used for service cluster IPs.
# according to rfc 1918 ref: https://tools.ietf.org/html/rfc1918 choose a private ip range here.
export SERVICE_CLUSTER_IP_RANGE=${SERVICE_CLUSTER_IP_RANGE:-192.168.3.0/24} # formerly PORTAL_NET
# define the IP range used for flannel overlay network, should not conflict with above SERVICE_CLUSTER_IP_RANGE
export FLANNEL_NET=${FLANNEL_NET:-172.16.0.0/16}
# Optionally add other contents to the Flannel configuration JSON
# object normally stored in etcd as /coreos.com/network/config. Use
# JSON syntax suitable for insertion into a JSON object constructor
# after other field name:value pairs. For example:
# FLANNEL_OTHER_NET_CONFIG=', "SubnetMin": "172.16.10.0", "SubnetMax": "172.16.90.0"'
export FLANNEL_OTHER_NET_CONFIG
FLANNEL_OTHER_NET_CONFIG=''
# Admission Controllers to invoke prior to persisting objects in cluster
export ADMISSION_CONTROL=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,SecurityContextDeny
# Path to the config file or directory of files of kubelet
export KUBELET_CONFIG=${KUBELET_CONFIG:-""}
# A port range to reserve for services with NodePort visibility
SERVICE_NODE_PORT_RANGE=${SERVICE_NODE_PORT_RANGE:-"30000-32767"}
# Optional: Enable node logging.
ENABLE_NODE_LOGGING=false
LOGGING_DESTINATION=${LOGGING_DESTINATION:-elasticsearch}
# Optional: When set to true, Elasticsearch and Kibana will be setup as part of the cluster bring up.
ENABLE_CLUSTER_LOGGING=false
ELASTICSEARCH_LOGGING_REPLICAS=${ELASTICSEARCH_LOGGING_REPLICAS:-1}
# Optional: When set to true, heapster, Influxdb and Grafana will be setup as part of the cluster bring up.
ENABLE_CLUSTER_MONITORING="${KUBE_ENABLE_CLUSTER_MONITORING:-true}"
# Extra options to set on the Docker command line. This is useful for setting
# --insecure-registry for local registries.
DOCKER_OPTS=${DOCKER_OPTS:-""}
# Extra options to set on the kube-proxy command line. This is useful
# for selecting the iptables proxy-mode, for example.
KUBE_PROXY_EXTRA_OPTS=${KUBE_PROXY_EXTRA_OPTS:-""}
# Optional: Install cluster DNS.
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
# DNS_SERVER_IP must be an IP in SERVICE_CLUSTER_IP_RANGE
DNS_SERVER_IP=${DNS_SERVER_IP:-"192.168.3.10"}
DNS_DOMAIN=${DNS_DOMAIN:-"cluster.local"}
DNS_REPLICAS=${DNS_REPLICAS:-1}
# Optional: Install Kubernetes UI
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
# Optional: Enable setting flags for kube-apiserver to turn on behavior in active-dev
RUNTIME_CONFIG="--basic-auth-file=password.csv"
# Optional: Add http or https proxy when download easy-rsa.
# Add environment variables separated with blank space like "http_proxy=http://10.x.x.x:8080 https_proxy=https://10.x.x.x:8443"
PROXY_SETTING=${PROXY_SETTING:-""}
DEBUG=${DEBUG:-"false"}
Then, I created a pod using the following YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
And a service using the following YAML:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 8000
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
  type: NodePort
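For reference, both objects were created with kubectl, and the cluster IP that was assigned can be checked afterwards; the file names below are just illustrative:
kubectl create -f nginx-pod.yaml       # the Pod manifest above
kubectl create -f nginx-service.yaml   # the Service manifest above
kubectl get svc nginx-service          # shows the assigned ClusterIP and NodePort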
Then, I got a terminal in the running container using docker exec -it [CONTAINER_ID] bash.
There are mainly two problems: first, the container cannot resolve external domain names (DNS does not work), and second, the container cannot ping service ClusterIPs.
The host's /etc/resolv.conf
file is as follows:
nameserver 8.8.8.8
nameserver 127.0.1.1
The container's /etc/resolv.conf
file is as follows:
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 192.168.3.10
nameserver 8.8.8.8
nameserver 127.0.1.1
options ndots:5
Regarding the first problem, I think it could be related to either a SkyDNS nameservers misconfiguration or some custom configuration that I have to do but am not aware of.
However, I don't have any idea why the containers cannot ping ClusterIPs.
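To make the two symptoms concrete, this is roughly how they show up from inside the container (the commands and addresses below are illustrative):
# External names fail to resolve (nslookup may not exist in the nginx
# image, so getent is used here)
getent hosts google.com      # returns nothing while 192.168.3.10 misbehaves
# ICMP to a ClusterIP gets no reply; 192.168.3.1 is the first IP of
# SERVICE_CLUSTER_IP_RANGE, which is assigned to the kubernetes service
ping -c 2 192.168.3.1        # 100% packet loss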
Any workarounds?
Upvotes: 14
Views: 23385
Reputation: 1
I am using the Azure AKS service and am facing a similar issue. I was able to ping the pod IPs, but not the service endpoint. As a workaround, I used a headless Service, whose endpoints are pingable.
[ root@curl:/ ]$ ping sfs-statefulset-0.headless-svc.myns.svc.cluster.local
PING sfs-statefulset-0.headless-svc.myns.svc.cluster.local (10.244.0.37): 56 data bytes
64 bytes from 10.244.0.37: seq=0 ttl=64 time=0.059 ms
64 bytes from 10.244.0.37: seq=1 ttl=64 time=0.090 ms
64 bytes from 10.244.0.37: seq=2 ttl=64 time=0.087 ms
^C
--- sfs-statefulset-0.headless-svc.myns.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.059/0.078/0.090 ms
[ root@curl:/ ]$ ping headless-svc.myns.svc.cluster.local
PING headless-svc.myns.svc.cluster.local (10.244.0.36): 56 data bytes
64 bytes from 10.244.0.36: seq=0 ttl=64 time=0.051 ms
64 bytes from 10.244.0.36: seq=1 ttl=64 time=0.092 ms
64 bytes from 10.244.0.36: seq=2 ttl=64 time=0.098 ms
64 bytes from 10.244.0.36: seq=3 ttl=64 time=0.083 ms
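For reference, the headless Service used above could be declared roughly like this; the name and namespace match the transcript, while the selector and port are assumptions:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: myns
spec:
  clusterIP: None       # headless: DNS resolves straight to pod IPs
  selector:
    app: sfs            # hypothetical label on the StatefulSet's pods
  ports:
  - port: 80            # hypothetical port
EOF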
Upvotes: 0
Reputation: 51
If the Service is implemented with iptables, then the ClusterIP cannot be pinged, because the iptables rules only match packets addressed to the service's protocol and port; there is no rule that answers ICMP. But when you curl clusterIP:port, an iptables rule DNATs that TCP packet to a pod. (In the example below, the service has no endpoints, so the rule rejects the packet instead.)
#ping 10.96.229.40
PING 10.96.229.40 (10.96.229.40) 56(84) bytes of data.
^C
--- 10.96.229.40 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
#iptables-save |grep 10.96.229.40
-A KUBE-SERVICES -d 10.96.229.40/32 -p tcp -m comment --comment "***-service:https has no endpoints" -m tcp --dport 8443 -j REJECT --reject-with icmp-port-unreachable
If the Service is implemented with IPVS, then you can ping the ClusterIP, but the response is sent by the local loopback device, because kube-proxy adds a local route that directs the address to lo:
# ip route get 10.68.155.139
local 10.68.155.139 dev lo src 10.68.155.139
cache <local>
# ping -c 1 10.68.155.139
PING 10.68.155.139 (10.68.155.139) 56(84) bytes of data.
64 bytes from 10.68.155.139: icmp_seq=1 ttl=64 time=0.045 ms
--- 10.68.155.139 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
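To check which of the two modes kube-proxy is actually running on a node, one quick way (assuming kube-proxy's default metrics port 10249) is:
# Ask kube-proxy for its proxy mode; run this on a cluster node
curl -s http://localhost:10249/proxyMode   # prints "iptables" or "ipvs"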
Upvotes: 2
Reputation: 21
Another way to handle the DNS part of the issue is to set upstream nameservers in a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
Upvotes: 2
Reputation: 129
I can answer your ping ClusterIP
problem.
I met the same problem when trying to ping a service's cluster IP from a Pod.
The resolution: the cluster IP cannot be pinged, since it is a virtual IP with nothing answering ICMP behind it, but the endpoint can be accessed with curl on the service port.
I had to dig around to find the details about pinging a virtual IP.
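A quick way to see this with the nginx-service from the question, looking up the assigned cluster IP instead of assuming it:
CLUSTER_IP=$(kubectl get svc nginx-service -o jsonpath='{.spec.clusterIP}')
ping -c 2 "$CLUSTER_IP"              # no replies: the VIP does not answer ICMP
curl -s "http://$CLUSTER_IP:8000/"   # works: the proxy forwards TCP to the pod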
Upvotes: 12
Reputation: 601
I found a workaround. The SkyDNS documentation, in its command-line arguments section and specifically for the "nameservers" argument, implies that:
nameservers: forward DNS requests to these (recursive) nameservers (array of IP:port combination), when not authoritative for a domain. This defaults to the servers listed in /etc/resolv.conf
But it does not! To solve the problem, the DNS addon replication controller configuration file (cluster/addons/dns/skydns-rc.yaml.in) should be changed to contain an explicit nameservers configuration. I changed the skydns container part as follows and it worked like a charm.
- name: skydns
  image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
  resources:
    # keep request = limit to keep this container in guaranteed class
    limits:
      cpu: 100m
      memory: 50Mi
    requests:
      cpu: 100m
      memory: 50Mi
  args:
  # command = "/skydns"
  - -machines=http://127.0.0.1:4001
  - -addr=0.0.0.0:53
  - -nameservers=8.8.8.8:53
  - -ns-rotate=false
  - -domain={{ pillar['dns_domain'] }}.
  ports:
  - containerPort: 53
    name: dns
    protocol: UDP
  - containerPort: 53
    name: dns-tcp
    protocol: TCP
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
      scheme: HTTP
    initialDelaySeconds: 30
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
      scheme: HTTP
    initialDelaySeconds: 1
    timeoutSeconds: 5
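One way to pick up the change is to re-run the addons deployment and then re-check resolution from a pod (getent is used because nslookup may not be present in the nginx image):
cd kubernetes/cluster/ubuntu && ./deployAddons.sh
kubectl exec nginx -- getent hosts google.com          # external name
kubectl exec nginx -- getent hosts kubernetes.default  # cluster-internal name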
Upvotes: 0