Reputation: 157
In the following pod YAML, I cannot get the source command to work. Initially I inserted the command under args, between echo starting and echo done, and now I have tried {.lifecycle.postStart}, to no avail.
apiVersion: v1
kind: Pod
metadata:
  name: mubu62
  labels:
    app: mubu62
spec:
  containers:
  - name: mubu621
    image: dockreg:5000/mubu6:v6
    imagePullPolicy: Always
    ports:
    - containerPort: 5021
    command: ["/bin/sh","-c"]
    args:
    - echo starting;
      echo CONT1=\"mubu621\" >> /etc/environment;
      touch /mubu621;
      sed -i 's/#Port 22/Port 5021/g' /etc/ssh/sshd_config;
      sleep 3650d;
      echo done;
    lifecycle:
      postStart:
        exec:
          command: ["/bin/bash","-c","source /etc/environment"]
  - name: mubu622
    image: dockreg:5000/mubu6:v6
    imagePullPolicy: Always
    ports:
    - containerPort: 5022
  imagePullSecrets:
  - name: regcred
  nodeName: spring
  restartPolicy: Always
kubectl apply throws no errors, but echo $CONT1 returns nada! mubu6 is a modified Ubuntu image.
The reason I am doing this is that when I ssh from another pod into this pod (mubu621), Kubernetes environment variables set through env are not visible in the ssh session.
Any help would be much appreciated!
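For completeness, this is roughly how I am checking (pod and container names as above; exec-ing a shell is just one way to look inside the container):
kubectl exec -it mubu62 -c mubu621 -- bash -c 'echo $CONT1'   # prints nothing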
Upvotes: 1
Views: 1889
Reputation: 157
After experimenting with the suggestions under set-environment-variable-automatically-upon-ssh-login, what worked was to substitute
echo CONT1=\"mubu621\" >> /etc/environment;
with
echo CONT1=\"mubu621\" >> /root/.bashrc;
and to delete the block
lifecycle:
  postStart:
    exec:
      command: ["/bin/bash","-c","source /etc/environment"]
which didn't work anyway.
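For reference, the relevant part of the container spec now looks roughly like this (same commands as in the question, only the redirect target changed):
    command: ["/bin/sh","-c"]
    args:
    - echo starting;
      echo CONT1=\"mubu621\" >> /root/.bashrc;
      touch /mubu621;
      sed -i 's/#Port 22/Port 5021/g' /etc/ssh/sshd_config;
      sleep 3650d;
      echo done;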
Upon SSH-ing from container mubu622 to container mubu621, I can now successfully execute echo $CONT1 and get mubu621 as output, without having to source /root/.bashrc first, which was necessary when the variable was written to /etc/environment.
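A quick check from inside mubu622 might look roughly like this (port 5021 comes from the sed on sshd_config above; the address placeholder is whatever you use to reach mubu621, and since both containers are in the same pod, localhost works too):
ssh -p 5021 root@<mubu621-address>
# then, inside the ssh session:
echo $CONT1    # prints: mubu621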
In summary: when using a bash shell in Kubernetes containers, you can SSH in from another container and echo variables written to /root/.bashrc without sourcing anything (whereas Kubernetes env variables are not available in an ssh session). This is very useful, e.g. in multi-container pods, so that you know, among other things, in which container you are currently logged in.
Upvotes: 2
Reputation: 952
Move the env variable into the env section of your pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: mubu62
  labels:
    app: mubu62
spec:
  containers:
  - name: mubu621
    image: dockreg:5000/mubu6:v6
    imagePullPolicy: Always
    ports:
    - containerPort: 5021
    command: ["/bin/sh","-c"]
    env:
    - name: CONT1
      value: mubu621
As one of the comments already indicated, your source command probably works, but only in the shell where it is executed. If you'd like the variable to be available to other commands, use the env field of the container spec. Consider this minimal example, using busybox:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-ec", "sleep 1000"]
    env:
    - name: TEST_ENV
      value: "test_val"
With this, when you run the env command inside the pod, you'll see TEST_ENV appear as expected:
$ kubectl exec -it busybox-6d467f94db-sj9nz env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=busybox-6d467f94db-sj9nz
TERM=xterm
TEST_ENV=test_val
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOME=/root
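If you only want to check a single variable, something along these lines (same pod name as above) should work too:
$ kubectl exec busybox-6d467f94db-sj9nz -- sh -c 'echo $TEST_ENV'   # should print: test_val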
Read more about env variables in pods in the Kubernetes docs.
Upvotes: 1