Reputation: 1151
I have created a Kind cluster with the containerd runtime. Here are my nodes:
root@dev-001:~# k get nodes -o wide
NAME                          STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
local-cluster-control-plane   Ready    control-plane,master   7d8h   v1.20.2   172.18.0.2    <none>        Ubuntu 20.10   5.4.0-81-generic   containerd://1.4.0-106-gce4439a8
local-cluster-worker          Ready    <none>                 7d8h   v1.20.2   172.18.0.5    <none>        Ubuntu 20.10   5.4.0-81-generic   containerd://1.4.0-106-gce4439a8
local-cluster-worker2         Ready    <none>                 7d8h   v1.20.2   172.18.0.3    <none>        Ubuntu 20.10   5.4.0-81-generic   containerd://1.4.0-106-gce4439a8
local-cluster-worker3         Ready    <none>                 7d8h   v1.20.2   172.18.0.4    <none>        Ubuntu 20.10   5.4.0-81-generic   containerd://1.4.0-106-gce4439a8
How can I SSH into the nodes?
Kind version: 0.11.1 or greater
Runtime: containerd (not docker)
Upvotes: 6
Views: 11492
Reputation: 3214
Kind Kubernetes uses Docker to create containers which act as Kubernetes nodes:
kind is a tool for running local Kubernetes clusters using Docker container “nodes”.
So basically the layers are: your host -> Docker containers on your host acting as Kubernetes nodes -> container runtimes (here containerd) inside those nodes, used for running pods.
In order to "SSH" into a node you need to exec into the corresponding Docker container. Let's do it.
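In short, it boils down to one command (a sketch; the actual node container name comes from docker ps, as shown in the steps below):
docker exec -it <node-container-name> sh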
First, we will get the list of nodes by running kubectl get nodes -o wide:
NAME                 STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION    CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane,master   5m5s    v1.21.1   172.18.0.2    <none>        Ubuntu 21.04   5.11.0-1017-gcp   containerd://1.5.2
kind-worker          Ready    <none>                 4m38s   v1.21.1   172.18.0.4    <none>        Ubuntu 21.04   5.11.0-1017-gcp   containerd://1.5.2
kind-worker2         Ready    <none>                 4m35s   v1.21.1   172.18.0.3    <none>        Ubuntu 21.04   5.11.0-1017-gcp   containerd://1.5.2
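As a side note, if you only need the node names and their internal IPs for the comparison later on, a jsonpath query like this should work (a sketch using standard kubectl jsonpath; not part of the original walkthrough):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'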
Let's suppose we want to SSH into the kind-worker node.
Now, we will get the list of Docker containers (docker ps -a) and check that all nodes are there:
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS         PORTS                       NAMES
7ee204ad5fd1   kindest/node:v1.21.1   "/usr/local/bin/entr…"   10 minutes ago   Up 8 minutes                               kind-worker
434f54087e7c   kindest/node:v1.21.1   "/usr/local/bin/entr…"   10 minutes ago   Up 8 minutes   127.0.0.1:35085->6443/tcp   kind-control-plane
2cb2e9465d18   kindest/node:v1.21.1   "/usr/local/bin/entr…"   10 minutes ago   Up 8 minutes                               kind-worker2
Take a look at the NAMES column - these are the node names used in Kubernetes.
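If the host runs other containers besides the cluster, you can narrow the listing down with standard docker ps flags, for example (the name=kind- prefix is an assumption based on the default cluster name):
docker ps --filter "name=kind-" --format "{{.Names}}"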
Now we will use the standard docker exec command to connect to the running container's shell - docker exec -it kind-worker sh - and then run ip a inside the container to check that the IP address matches the one from the kubectl get nodes command:
# ls
bin boot dev etc home kind lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
...
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    inet 172.18.0.4/16 brd 172.18.255.255 scope global eth0
...
#
As you can see, we successfully connected to the node used by Kind Kubernetes - the IP address 172.18.0.4 matches the IP address from the kubectl get nodes command.
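For the cluster from the question the same approach should work; a sketch, assuming local-cluster is the cluster name (inferred from the node names in the question):
# List the node containers of a named kind cluster
kind get nodes --name local-cluster
# Exec into one of the worker nodes
docker exec -it local-cluster-worker sh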
Upvotes: 11
Reputation: 142014
A simple Google search will reveal the answer:
https://cloud.google.com/anthos/clusters/docs/on-prem/1.3/how-to/ssh-cluster-node
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] get secrets \
-n [USER_CLUSTER_NAME] ssh-keys \
-o jsonpath='{.data.ssh\.key}' | base64 -d > \
~/.ssh/[USER_CLUSTER_NAME].key \
&& chmod 600 ~/.ssh/[USER_CLUSTER_NAME].key
where:
[ADMIN_CLUSTER_KUBECONFIG]
is the path of your admin cluster's kubeconfig file.
[USER_CLUSTER_NAME]
is the name of your user cluster.
ssh.key
is the field of a Secret named ssh-keys in the [USER_CLUSTER_NAME] namespace where the private key is stored.
Then SSH into the node:
ssh -i ~/.ssh/[USER_CLUSTER_NAME].key user@[NODE_IP]
where:
[NODE_IP]
is the internal IP address of a node in your user cluster, which you gathered previously.
Upvotes: -1