Reputation: 181
I am trying to deploy a K8s cluster from scratch using Kelsey Hightower's Learn Kubernetes the Hard Way guide. In my case I am using Vagrant and VirtualBox.
Each of my masters and workers has a DHCP-assigned eth0 (10.0.2.x range) for pulling bits from the internet and a static eth1 range (10.10.10.x/24) for internal k8s communication.
[vagrant@master-1 ~]$ kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
worker-1   Ready    <none>   32s   v1.12.0   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   containerd://1.2.0-rc.0
worker-2   Ready    <none>   2s    v1.12.0   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   containerd://1.2.0-rc.0
I initially did not have the --node-ip="10.10.10.x" and --address="10.10.10.x" flags set.
After adding them, I deleted the nodes and restarted the kubelet service, hoping the nodes would register again, but the INTERNAL-IP still does not update.
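Concretely, what I did was along these lines:
# on the master: drop the stale node objects so they can re-register
kubectl delete node worker-1 worker-2
# on each worker: restart kubelet after changing its flags
sudo systemctl daemon-reload
sudo systemctl restart kubelet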
Following is a sample of the kubelet config:
/var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
/etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--node-ip="$NODE_IP" \\
--address="$NODE_IP" \\
--register-node=true \\
--v=2
and the kube-apiserver unit:
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--enable-swagger-ui=true \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.10.10.11:2379,https://10.10.10.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
Also, in Vagrant I believe eth0 is the NAT device, since I see the 10.0.2.15 IP assigned to all VMs (master and workers):
[vagrant@worker-1 ~]$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:75:dc:3d brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
valid_lft 84633sec preferred_lft 84633sec
inet6 fe80::5054:ff:fe75:dc3d/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:24:a4:c2 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.206/24 brd 192.168.0.255 scope global noprefixroute dynamic eth1
valid_lft 3600sec preferred_lft 3600sec
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:76:22:4a brd ff:ff:ff:ff:ff:ff
inet 10.10.10.21/24 brd 10.10.10.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe76:224a/64 scope link
valid_lft forever preferred_lft forever
[vagrant@worker-1 ~]$
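For reference, the $NODE_IP used in the kubelet unit above could be derived from the static interface instead of the NAT one, e.g. (a sketch; eth2 as in the output above):
# pick the IPv4 address of the static interface, e.g. 10.10.10.21
NODE_IP=$(ip -4 -o addr show eth2 | awk '{print $4}' | cut -d/ -f1)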
I guess the ask is: how do I get the INTERNAL-IP (and EXTERNAL-IP) to update after these changes to the kubelet configuration?
Upvotes: 14
Views: 31244
Reputation: 16836
For new clusters that are initialized using kubeadm, the approach shown below is the recommended way, as stated in the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
Preferably, the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.
The cluster then has to be initialized using this command
kubeadm init --config=/tmp/kubeadm-init-config.yaml
with this config file /tmp/kubeadm-init-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    ########## NOTICE THIS ########## < < < < < < < < < <
    node-ip: {{ internal_ip_address }}
patches:
  directory: /tmp/kubelet-patches
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{ domain_name }}
controlPlaneEndpoint: {{ control_plane_endpoint }}
networking:
  dnsDomain: cluster.local
  podSubnet: {{ pod_network_cidr }}
The same approach can be followed for Join, but with JoinConfiguration, though it is significantly more complex because of the need to generate the token and the CA hash manually.
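For completeness, a join config could look roughly like this (a sketch; the token and CA-cert hash placeholders have to be generated beforehand, e.g. with kubeadm token create and by hashing the cluster CA public key as described in the kubeadm docs):
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: {{ control_plane_endpoint }}
    token: {{ bootstrap_token }}
    caCertHashes:
      - sha256:{{ ca_cert_hash }}
nodeRegistration:
  kubeletExtraArgs:
    node-ip: {{ internal_ip_address }}
which would then be passed to kubeadm join --config=/tmp/kubeadm-join-config.yaml.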
Upvotes: 0
Reputation: 136
The above solutions did not work for me; I had to add --node-ip directly to ExecStart on each worker node:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
and add the node-ip flag at the end of ExecStart:
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=10.0.0.11
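then reload systemd and restart kubelet (the usual steps after editing a drop-in):
systemctl daemon-reload
systemctl restart kubelet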
After that, kubectl get nodes returns the node internal IPs.
Upvotes: 0
Reputation: 171
What I did is similar to the answer from Michael, only I didn't edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
As suggested here it is stated (I emphasize some points):
The file that can contain user-specified flag overrides with KUBELET_EXTRA_ARGS is sourced from /etc/default/kubelet (for DEBs), or /etc/sysconfig/kubelet (for RPMs). KUBELET_EXTRA_ARGS is last in the flag chain and has the highest priority in the event of conflicting settings.
So rather than changing files that are meant to be managed automatically (e.g. by upgrades), I changed /etc/sysconfig/kubelet to contain KUBELET_EXTRA_ARGS='--node-ip 10.0.0.3' (the available options can be seen with kubelet --help).
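For reference, the resulting file is just that one line (the IP here is an example):
# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS='--node-ip 10.0.0.3'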
Then I did, as specified by other answers:
systemctl daemon-reload #actually not needed as per commentary below.
systemctl restart kubelet
kubectl get nodes -o wide
And the internal IP was changed accordingly.
Upvotes: 11
Reputation: 9283
I edited /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, adding the --node-ip flag to KUBELET_CONFIG_ARGS, and restarted kubelet with:
systemctl daemon-reload
systemctl restart kubelet
And kubectl get nodes -o wide reported the new IP addresses immediately. It took a bit longer when I did it on the master - but it happened eventually.
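For reference, the relevant line might end up looking roughly like this after the edit (IP is an example; a stock kubeadm drop-in may differ):
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --node-ip=10.10.10.21"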
Upvotes: 25