Jeff

Reputation: 463

kubernetes: Call a command in another container in the same pod

Is there any way for one container to run a command in another container when both containers are in the same pod?

I need many command-line tools that are shipped as images as well as in packages, but I don't want to install all of them into a single container because of some concerns.

Upvotes: 10

Views: 9935

Answers (5)

koehn

Reputation: 804

You can do this without shareProcessNamespace by using a shared volume and some named pipes. This approach manages all the I/O for you, is trivially simple, and is extremely fast.

For a complete description and code, see the solution I created; it contains examples.
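
Here is a minimal sketch of the idea; the image names, paths, and pipe names are placeholders. A "tools" container creates a FIFO on a shared emptyDir volume and executes whatever command lines arrive on it, while the "app" container writes commands to the pipe and reads the output back:

apiVersion: v1
kind: Pod
metadata:
  name: pipe-demo
spec:
  volumes:
  - name: shared
    emptyDir: {}
  containers:
  - name: tools
    image: my-tools-image        # placeholder: the image with the CLI tools
    command: ["bash", "-c"]
    args:
    - |
      [ -p /shared/cmd.pipe ] || mkfifo /shared/cmd.pipe
      # Block on the pipe, run each incoming line, and write the output
      # where the caller can pick it up.
      while true; do
        read -r cmd < /shared/cmd.pipe && eval "$cmd" > /shared/out.txt 2>&1
      done
    volumeMounts:
    - name: shared
      mountPath: /shared
  - name: app
    image: my-app-image          # placeholder: the container that needs the tools
    command: ["bash", "-c", "sleep infinity"]
    volumeMounts:
    - name: shared
      mountPath: /shared

From the app container, echo "some-tool --flags" > /shared/cmd.pipe and then read /shared/out.txt. A real implementation should also serialize concurrent callers and stream output back through a second pipe rather than a file.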

Upvotes: 3

sudo

Reputation: 1658

We are currently running EKS v1.20, and we were able to achieve this using the shareProcessNamespace: true setting that mr haven mentioned. In our particular case, we needed a Debian 10 PHP container to execute a SAS binary with arguments; SAS is installed and running in a CentOS 7 container in the same pod. Using Helm, we enabled shareProcessNamespace, and in the PHP container's command and args fields we used bash -c to build symlinks to that binary once the pod came online. We grab the PID of the shared container with pgrep: since we know the CentOS container's entry point is tail -f /dev/null, we initially just look for that process with $(pgrep tail).

- image: some_php_container
  command: ["bash", "-c"]
  args: [ "SAS_PROC_PID=$(pgrep tail) && \
           ln -sf /proc/$SAS_PROC_PID/root/usr/local/SAS/SAS_9.4/SASFoundation/9.4/bin/sas_u8 /usr/bin/sas && \
           ln -sf /proc/$SAS_PROC_PID/root/usr/local/SAS /usr/local/SAS && \
           . /opt/script_runner.sh" ]

Now the PHP container can execute the sas command with arguments and process data files using the SAS software running in the CentOS container.

One issue we quickly found: if the SAS container happens to die, its PID changes when it restarts, and the symlinks on the PHP container are broken. So we put in a liveness probe that frequently checks whether the path to the binary under the current PID still exists; if the probe fails, the PHP container is restarted, rebuilding the symlinks with the right PID.

livenessProbe:
    exec:
      command:
      - bash
      - -c
      # re-resolve the SAS PID and verify the symlink target still exists
      - SAS_PROC_PID=$(pgrep tail) && test -f /proc/$SAS_PROC_PID/root/usr/local/SAS/SAS_9.4/SASFoundation/9.4/bin/sas_u8
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 1

Hopefully the above info helps someone else.

Upvotes: 2

mr haven

Reputation: 1644

This is very possible as long as you are on Kubernetes v1.17+. Enable shareProcessNamespace: true, and all of the container processes become visible to the other containers in the same pod.

Have a look at the Kubernetes docs on sharing a process namespace between containers in a pod.
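
A minimal pod spec illustrating this (the nginx/busybox images are just for demonstration):

apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true      # one PID namespace for every container in the pod
  containers:
  - name: nginx
    image: nginx
  - name: shell
    image: busybox
    command: ["sh", "-c", "sleep 1h"]
    securityContext:
      capabilities:
        add: ["SYS_PTRACE"]        # lets this container signal/inspect processes it doesn't own

kubectl exec -it shared-pid-demo -c shell -- ps then lists the nginx processes too, and /proc/<pid>/root gives you the other container's filesystem, which is exactly the trick the SAS answer above relies on.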

Upvotes: 7

Amit

Reputation: 61

Containers in a pod are isolated from each other, except that they share volumes and the network namespace. So you cannot directly execute a command from one container in another. However, you could expose the commands in the container through an API, as sketched below.
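
For example (a hedged sketch; the image and tool names are placeholders, and it assumes socat is installed in the tools image), you can wrap a CLI tool in a tiny TCP service. Because containers in a pod share the network namespace, the other container can reach it on localhost:

containers:
- name: tools
  image: my-tools-image            # placeholder; must include socat
  # Run the tool once per incoming connection; the socket carries its stdin/stdout.
  command: ["socat"]
  args: ["TCP-LISTEN:9000,fork,reuseaddr", "EXEC:/usr/local/bin/some-tool"]
- name: app
  image: my-app-image              # placeholder
  command: ["tail", "-f", "/dev/null"]

From the app container: echo input | socat - TCP:localhost:9000 (or nc localhost 9000).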

Upvotes: 2

David Maze

Reputation: 158656

In general, no, you can't do this in Kubernetes (or in plain Docker). You should either move the two interconnected things into the same container, or wrap some sort of network service around the thing you're trying to call (and then probably put it in a separate pod with a separate service in front of it).

There might be something you could do if you set up a service account, installed a Kubernetes API sidecar container, and used the Kubernetes API to do the equivalent of kubectl exec, but I'd consider this a solution of last resort.
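
For completeness, a sketch of what that last-resort route needs (all names here are placeholders): a Role granting the pods/exec subresource, bound to the service account the calling pod runs as.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]                   # look up the target pod
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]                # exec is a POST to the pods/exec subresource
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-exec
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-app                     # the service account the calling pod uses
  namespace: default
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io

A container running with this service account can then call the exec subresource through a client library, or simply run kubectl exec if kubectl is baked into the image; the in-cluster config picks up the mounted token automatically.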

Upvotes: 3
