Reputation: 6122
In a container inside a pod, how can I run a command using kubectl? For example, if I need to do something like this inside a container:
kubectl get pods
I have tried this: in my Dockerfile, I have these commands:
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN sudo mv ./kubectl /usr/local/bin/kubectl
EDIT: I was using the OS X binary; I have corrected it to the Linux binary. (Pointed out by @svenwltr.)
Building the Docker image succeeds, but when I run this inside a container:
kubectl get pods
I get this error:
The connection to the server : was refused - did you specify the right host or port?
When deploying locally, I encountered this error whenever my docker-machine was not running. But inside a container, how can a docker-machine be running?
Locally, I get around this error by running the following commands (dev is the name of the docker-machine):
docker-machine env dev
eval $(docker-machine env dev)
Can someone please tell me what it is that I need to do?
Upvotes: 73
Views: 88613
Reputation: 448
kubectl exec -it <pod-name> -- <command-name>
kubectl exec -it <pod-name> -c <container-name> -- <command-name>
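For example, with a hypothetical pod named mypod and container named mycontainer, this opens a shell in that container:
kubectl exec -it mypod -c mycontainer -- /bin/sh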
Upvotes: -6
Reputation: 41
Running kubectl commands inside a container takes three steps.
1- Install kubectl in the container image, e.g. on a yum-based image:
RUN printf '[kubernetes] \nname = Kubernetes\nbaseurl = https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64\nenabled = 1\ngpgcheck = 1\nrepo_gpgcheck=1\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg' \
| tee /etc/yum.repos.d/kubernetes.repo \
&& cat /etc/yum.repos.d/kubernetes.repo \
&& yum install -y kubectl
2- Create a ServiceAccount and bind it to the cluster-admin role:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysa-admin-sa
  namespace: mysa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: mysa-admin-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: mysa-admin-sa
  namespace: mysa
3- Example of a CronJob configuration:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scaleup
  namespace: myapp
spec:
  schedule: "00 5 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: mysa-admin-sa
          restartPolicy: OnFailure
          containers:
          - name: scale-up
            image: myimage:test
            imagePullPolicy: Always
            command: ["/bin/sh"]
            args: ["-c", "mykubcmd_script >>/mylog.log"]
Upvotes: 1
Reputation: 5497
Combining the answers above, this did the trick for me for retrieving all pods from within a container:
curl --insecure -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods
See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#-strong-read-operations-pod-v1-core-strong- for the REST API.
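If you prefer not to use --insecure, the cluster CA bundle is mounted next to the token in every pod, so you can verify the apiserver certificate instead:
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods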
Upvotes: 3
Reputation: 146
I just faced this again. It is absolutely possible, but for security reasons let's not give that container cluster-admin privileges via a ClusterRole.
Let's say we want to deploy a pod with access to view and create pods only in a specific namespace of the cluster. In this case, a ServiceAccount setup could look like:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: spinupcontainers
subjects:
- kind: ServiceAccount
  name: spinupcontainers
  namespace: <YOUR_NAMESPACE>
roleRef:
  kind: Role
  name: spinupcontainers
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: spinupcontainers
  # "namespace" would be omitted for a ClusterRole, because ClusterRoles are not namespaced
  namespace: <YOUR_NAMESPACE>
  labels:
    k8s-app: <YOUR_APP_LABEL>
rules:
#
# Give here only the privileges you need
#
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - create
  - update
  - patch
  - delete
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinupcontainers
  namespace: <YOUR_NAMESPACE>
  labels:
    k8s-app: <YOUR_APP_LABEL>
If you apply the service account in your deployment with serviceAccountName: spinupcontainers in the pod spec, you don't need to mount any additional secret volumes or manually attach certificates; the kubectl client will get the required token from /var/run/secrets/kubernetes.io/serviceaccount.
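For reference, a minimal Deployment sketch wired to that account (the deployment name and image below are placeholders, not from the original answer):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spinup-demo
  namespace: <YOUR_NAMESPACE>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spinup-demo
  template:
    metadata:
      labels:
        app: spinup-demo
    spec:
      # this is the only wiring needed; the token is auto-mounted
      serviceAccountName: spinupcontainers
      containers:
      - name: spinup-demo
        image: <YOUR_IMAGE_WITH_KUBECTL>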
Then you can test whether it is working with something like:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n <YOUR_NAMESPACE>
NAME     READY   STATUS    RESTARTS   AGE
pod1-0 1/1 Running 0 6d17h
pod2-0 1/1 Running 0 6d16h
pod3-0 1/1 Running 0 6d17h
pod3-2 1/1 Running 0 67s
or permission denied:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:spinupcontainers" cannot list resource "pods" in API group "" in the namespace "kube-system"
command terminated with exit code 1
Tested on:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Upvotes: 9
Reputation: 898
Bit late to the party here, but this is my two cents:
I've found using kubectl within a container much easier than calling the cluster's API. (Why? Auto authentication!)
Say you're deploying a Node.js project that needs kubectl usage.
1. Download and build kubectl inside the container
2. Build your application, copying kubectl to your container (a minimal Dockerfile sketch follows)
3. Voila! kubectl provides a rich CLI for managing your Kubernetes cluster
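A minimal sketch of such a Dockerfile, assuming a Node.js base image; the kubectl download follows the same URL pattern used elsewhere in this thread, and the app layout (package.json, index.js) is illustrative:
# Node.js app image with kubectl available on PATH
FROM node:18-slim

# slim images lack curl, so install it first; pinning a kubectl version
# in real builds is safer than resolving "stable" at build time
RUN apt-get update && apt-get install -y curl \
 && curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl \
 && chmod +x ./kubectl \
 && mv ./kubectl /usr/local/bin/kubectl

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "index.js"]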
--- EDITS ---
After working with kubectl in my cluster pods, I found a more effective way to authenticate pods so they can make k8s API calls. This method provides stricter authentication.
1. Create a ServiceAccount for your pod, and configure your pod to use said account (see the k8s Service Account docs).
2. Configure a RoleBinding or ClusterRoleBinding to allow services to have the authorization to communicate with the k8s API (see the k8s Role Binding docs).
When you're done, you will have the following: a ServiceAccount, a ClusterRoleBinding, and a Deployment (your pods).
Feel free to comment if you need some clearer direction, I'll try to help out as much as I can :)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-101
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: k8s-101
    spec:
      serviceAccountName: k8s-101-role
      containers:
      - name: k8s-101
        imagePullPolicy: Always
        image: salathielgenese/k8s-101
        ports:
        - name: app
          containerPort: 3000
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-role
subjects:
- kind: ServiceAccount
  name: k8s-101-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-101-role
The salathielgenese/k8s-101 image contains kubectl. So one can just log into a pod container and execute kubectl as if running it on the k8s host:
kubectl exec -it pod-container-id -- kubectl get pods
Upvotes: 43
Reputation: 39277
I would use the Kubernetes API; you just need to install curl instead of kubectl, and the rest is RESTful.
curl http://localhost:8080/api/v1/namespaces/default/pods
I'm running the above command on one of my apiservers. Change localhost to the apiserver's IP address or DNS name.
Depending on your configuration you may need to use TLS or provide a client certificate.
In order to find API endpoints, you can use --v=8 with kubectl.
example:
kubectl get pods --v=8
Resources:
Kubernetes API documentation
Update for RBAC:
I assume you have already configured RBAC, created a service account for your pod, and are running it with that account. This service account should have list permission on pods in the required namespace; to grant that, you need to create a role and role binding for that service account.
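A sketch of such a Role and RoleBinding, assuming the default namespace and a service account named mysa (all names here are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
# read-only access to pods, per the answer above
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: mysa
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io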
Every container in a cluster is populated with a token that can be used for authenticating to the API server. To verify, run this inside the container:
cat /var/run/secrets/kubernetes.io/serviceaccount/token
To make a request to the apiserver, inside the container run:
curl -ik \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
Upvotes: 49
Reputation: 18442
/usr/local/bin/kubectl: cannot execute binary file
It looks like you downloaded the OS X binary for kubectl. When running in Docker you probably need the Linux one:
https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
If you run kubectl in a properly configured Kubernetes cluster, it should be able to connect to the apiserver.
kubectl basically uses this code to find the apiserver and authenticate: github.com/kubernetes/client-go/rest.InClusterConfig
This means:
- It uses the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to locate the apiserver.
- It reads the service account token from /var/run/secrets/kubernetes.io/serviceaccount/token.
- It reads the CA certificate from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.
This is all the data kubectl needs to know to connect to the apiserver.
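A quick way to verify all three pieces from inside the container:
# apiserver location from the environment
env | grep KUBERNETES_SERVICE
# token and CA certificate mounted by Kubernetes
ls /var/run/secrets/kubernetes.io/serviceaccount/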
Some thoughts on why this might not work:
- Maybe the pod runs under a different or more restricted service account than expected (check spec.serviceAccountName).
Upvotes: 21