RndmSymbl

Reputation: 553

How to run minikube inside a docker container?

I intend to test a non-trivial Kubernetes setup as part of CI and wish to run the full system before CD. I cannot run --privileged containers, so I am running the docker container as a sibling to the host daemon using docker run -v /var/run/docker.sock:/var/run/docker.sock
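For reference, the sibling setup looks roughly like this (a sketch; the image name is a placeholder):

```shell
# Sketch of the sibling-container setup (my-ci-image is a placeholder).
# Mounting the host's Docker socket lets `docker` inside the container
# talk directly to the host daemon, so no --privileged is needed.
docker run -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-ci-image /bin/bash
```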

The basic docker setup seems to be working on the container:

linuxbrew@03091f71a10b:~$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

However, minikube fails to start inside the docker container, reporting connection issues:

linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378    2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538    2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213     197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541     197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil>  [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593     197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992     197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused                                                  

This is despite the network being linked and the port being properly forwarded:

linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID   IMAGE                                 COMMAND                  CREATED         STATUS         PORTS                                                                                                                                  NAMES
93c35cec7e6f   gcr.io/k8s-minikube/kicbase:v0.0.27   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes   127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp   minikube
51fbce78731e   7f7ba6fd30dd                          "/bin/bash"              8 minutes ago   Up 8 minutes                                                                                                                                          bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID     NAME       DRIVER    SCOPE
1e800987d562   bridge     bridge    local
aa6b2909aa87   host       host      local
d4db150f928b   kind       bridge    local
a781cb9345f4   minikube   bridge    local
0a8c35a505fb   none       null      local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube

The minikube container seems to be alive and well when trying to curl from the host, and even ssh is responding:

mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350 

mastercook@linuxkitchen:~$ ssh [email protected] -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names

What am I missing and how can I make minikube properly discover the correctly working minikube container?

Upvotes: 3

Views: 3812

Answers (2)

rokpoto.com

Reputation: 10784

You can run minikube in a Docker-in-Docker (dind) container; it will use the docker driver.

docker run --name dind -d --privileged docker:20.10.17-dind 
docker exec -it dind sh
/ # wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
/ # mv minikube-linux-amd64 minikube
/ # chmod +x minikube 
/ # ./minikube start --force
...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
/ # ./minikube kubectl -- run hello --image=hello-world
/ # ./minikube kubectl -- logs pod/hello

Hello from Docker!
...

Also, note that --force runs minikube with the docker driver as root, which we shouldn't do according to minikube's instructions.

Upvotes: 2

RndmSymbl

Reputation: 553

Because minikube does not complete the cluster creation, kind turns out to be the better choice for running Kubernetes in a (sibling) Docker container.

Given that the (sibling) container does not know enough about its own setup, the networking is slightly off: kind (and minikube) write a loopback IP into the kubeconfig upon cluster creation, even though the actual container sits on a different IP in the host's Docker network.
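The mismatch can be made visible by comparing the server address written into the kubeconfig with the control-plane container's real address (a sketch; cluster name acluster as in the steps below, example values taken from the outputs shown here):

```shell
# The kubeconfig points at loopback...
grep 'server:' "$HOME/.kube/config"
#     server: https://127.0.0.1:36779
# ...while the control-plane container sits on a bridge network:
docker inspect \
  -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
  acluster-control-plane
# e.g. 172.18.0.4
```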

To correct the networking, the (sibling) container needs to be connected to the network actually hosting the Kubernetes image. To accomplish this, the procedure is illustrated below:

1.) Create a Kubernetes cluster:
linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
 βœ“ Ensuring node image (kindest/node:v1.21.1) πŸ–Ό
 βœ“ Preparing nodes πŸ“¦  
 βœ“ Writing configuration πŸ“œ 
 βœ“ Starting control-plane πŸ•ΉοΈ 
 βœ“ Installing CNI πŸ”Œ 
 βœ“ Installing StorageClass πŸ’Ύ 
Set kubectl context to "kind-acluster"
You can now use your cluster with:

kubectl cluster-info --context kind-acluster

Thanks for using kind! 😊
2.) Verify that the cluster is accessible:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?

3.) Since the cluster cannot be reached, retrieve the control plane's IP. Note the "-control-plane" suffix appended to the cluster name:

linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)

4.) Update the kubeconfig with the actual control-plane IP:

linuxbrew@324ba0f819d7:~$ sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config

5.) This IP is still not reachable from the (sibling) container. To connect the container to the correct network, retrieve the Docker network ID:

linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)

6.) Finally, connect the (sibling) container's ID (which should be stored in the $HOSTNAME environment variable) to the cluster's Docker network:

linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME

7.) Verify that the control plane is accessible after the changes:

linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

If kubectl returns the Kubernetes control plane and CoreDNS URLs, as shown in the last step above, the configuration has succeeded.

Upvotes: 2
