Reputation: 345
I try to run a container with ENTRYPOINT /sbin/init
so that I can use the systemctl
command.
After that I use CMD sh-exports.sh
to execute some commands from the script.
Dockerfile
FROM registry.redhat.io/ubi8/ubi-init
ADD sh-exports.sh /
ARG S3FS_VERSION=v1.86
ARG MNT_POINT=/var/s3fs
ENV MNT_POINT=${MNT_POINT}
RUN yum install somepackages -y && mkdir -p "$MNT_POINT" && chmod 755 "$MNT_POINT" && chmod 777 /sh-exports.sh
ENTRYPOINT [ "/sbin/init", "$@" ]
CMD "/sh-exports.sh"
sh-exports.sh
#!/bin/bash
echo $EXPORTS > /etc/exports
systemctl restart some.services
sleep infinity
The script sh-exports.sh
is not executed.
I can log in to the container and run sh /sh-exports.sh
manually, and the script runs just fine.
So, is there any way to use ENTRYPOINT /sbin/init
and still run an arbitrary command from CMD
?
Upvotes: 1
Views: 2471
Reputation: 39284
Yes, this is by design, and the way to run your command is to have an entrypoint that receives the command as its arguments.
Your current attempt is the opposite of what is commonly done: usually a command is passed to an entrypoint, and that command is the one "keeping the container alive".
So, the Dockerfile should have these instructions:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
CMD ["sleep", "infinity"]
ENTRYPOINT ["/entrypoint.sh"]
And this entrypoint.sh should be:
#!/usr/bin/env bash
# Start init in the background, otherwise it blocks and the lines below never run
/sbin/init &
echo "$EXPORTS" > /etc/exports
systemctl restart some.services
exec "$@"
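The exec "$@" mechanism can be sanity-checked outside Docker with a stand-in entrypoint (file name and messages here are illustrative, not part of the image above):

```shell
# Write a stand-in entrypoint: it does some setup, then replaces
# itself with whatever command it received as arguments.
cat > /tmp/demo-entrypoint.sh <<'EOF'
#!/usr/bin/env bash
echo "setup done"
exec "$@"
echo "never reached"   # exec replaced the process, so this line never runs
EOF
chmod +x /tmp/demo-entrypoint.sh

# The arguments play the role of CMD:
/tmp/demo-entrypoint.sh echo "hello from CMD"
```

This prints "setup done" followed by "hello from CMD"; because of exec, the echoed command becomes the script's process, exactly as the container's main process replaces the entrypoint shell.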
If you want to be able to alter the service name passed to systemctl
at docker run time, as in
docker run my-container other-service
you can do this instead:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
CMD ["some.services"]
ENTRYPOINT ["/entrypoint.sh"]
And then, the entrypoint will look like:
#!/usr/bin/env bash
# Start init in the background so the remaining commands can execute
/sbin/init &
echo "$EXPORTS" > /etc/exports
systemctl restart "$@"
sleep infinity
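The same argument forwarding can be sketched outside Docker; here systemctl is replaced by an echo so the flow from CMD (or the docker run arguments) is visible (file name and output are illustrative):

```shell
# Stand-in for the entrypoint above: instead of calling systemctl,
# it just prints which services it would restart.
cat > /tmp/demo-restart.sh <<'EOF'
#!/usr/bin/env bash
echo "would restart: $*"
EOF
chmod +x /tmp/demo-restart.sh

/tmp/demo-restart.sh some.services   # what CMD ["some.services"] would pass
/tmp/demo-restart.sh other-service   # what docker run my-container other-service would pass
```

The first call prints "would restart: some.services" and the second "would restart: other-service", mirroring how the default CMD is overridden on the docker run command line.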
The reason for all this: when a command is used in combination with an entrypoint, the command is passed as arguments to the entrypoint, so the responsibility to execute the command (or not) is delegated to the entrypoint.
There is nothing "magic" happening, really; it is just like a normal shell script, where one script (the entrypoint) receives arguments (the command) and can execute those arguments as a command, or as part of a command.
This table explains the different behaviours:
| | No ENTRYPOINT | ENTRYPOINT exec_entry p1_entry | ENTRYPOINT ["exec_entry", "p1_entry"] |
|---|---|---|---|
| No CMD | error, not allowed | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry |
| CMD ["exec_cmd", "p1_cmd"] | exec_cmd p1_cmd | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry exec_cmd p1_cmd |
| CMD ["p1_cmd", "p2_cmd"] | p1_cmd p2_cmd | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry p1_cmd p2_cmd |
| CMD exec_cmd p1_cmd | /bin/sh -c exec_cmd p1_cmd | /bin/sh -c exec_entry p1_entry | exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd |
Source: https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact
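The exec-form vs shell-form rows can be reproduced in miniature: a script that prints each argument it receives shows that exec-form CMD arrives as separate arguments, while shell-form CMD arrives wrapped in /bin/sh -c (the script name here is illustrative):

```shell
# Print each received argument on its own line, wrapped in <>.
cat > /tmp/show-args.sh <<'EOF'
#!/bin/sh
for a in "$@"; do printf '<%s>\n' "$a"; done
EOF
chmod +x /tmp/show-args.sh

# CMD ["exec_cmd", "p1_cmd"] reaches the entrypoint as two arguments:
/tmp/show-args.sh exec_cmd p1_cmd
# CMD exec_cmd p1_cmd (shell form) reaches it as /bin/sh -c "exec_cmd p1_cmd",
# i.e. three arguments, the last being the whole command string:
/tmp/show-args.sh /bin/sh -c "exec_cmd p1_cmd"
```

The first call prints two lines (`<exec_cmd>` and `<p1_cmd>`); the second prints three (`</bin/sh>`, `<-c>`, `<exec_cmd p1_cmd>`), which is why "$@"-style forwarding only works cleanly with exec-form CMD.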
Upvotes: 4