Reputation: 99
I have looked around online and tried the obvious route (explained below) to remove an environment variable from a Docker image.
1 - I create a container from a modified ubuntu image using:
docker run -it --name my_container my_image
2 - I inspect the container and see the two environment variables that I want to remove using:
docker inspect my_container
which yields:
...
"Env": [
"env_variable_1=abcdef",
"env_variable_2=ghijkl",
"env_variable_3=mnopqr",
...
3 - I exec into the container and remove the environment variables via:
docker exec -it my_container bash
unset env_variable_1
unset env_variable_2
4 - I check to make sure the specified variables are gone:
docker inspect my_container
which yields:
...
"Env": [
"env_variable_3=mnopqr",
...
5 - I then commit this modified container as an image via:
docker commit my_container my_new_image
6 - And check for the presence of the deleted environmental variables via:
docker run -it --name my_new_container my_new_image
docker inspect my_new_container
which yields (drumroll please):
...
"Env": [
"env_variable_1=abcdef",
"env_variable_2=ghijkl",
"env_variable_3=mnopqr",
...
i.e., the deleted variables are not carried through from the modified container to the new image by docker commit.
What am I missing here? Is unset really deleting the variables? Should I use another method to remove these environment variables, or a different way to commit the container as an image?
PS: I've confirmed the variables exist inside the container via env. I then confirmed they were no longer active, using the same method, after running unset my_variable.
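For reference, the same behavior is reproducible without Docker: unset only affects the shell process it runs in, never the parent (a minimal sketch with a made-up variable name):

```shell
# unset in a child shell does not reach the parent process:
export demo_var=abcdef
bash -c 'unset demo_var; echo "child: ${demo_var:-<unset>}"'   # prints "child: <unset>"
echo "parent: ${demo_var:-<unset>}"                            # prints "parent: abcdef"
```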
Thanks for your help!
Upvotes: 3
Views: 10942
Reputation: 486
I wrote a shell script ("proxy-capsule.sh"), which I use in my Dockerfile. The script sets the HTTP proxy for the current shell session and executes the actual command afterwards:
#!/bin/bash
set -eo pipefail
export HTTP_PROXY="$1"
export HTTPS_PROXY="$1"
export NO_PROXY="$2"
export http_proxy="$1"
export https_proxy="$1"
export no_proxy="$2"
shift 2 # discard the first and second script argument
exec "$@" # use the remaining arguments to execute the actual command in the current environment
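The wrapper can be tried locally without Docker. This sketch writes the script to a temp file and runs a command through it; the proxy URL is a placeholder:

```shell
cat > /tmp/proxy-capsule.sh <<'EOF'
#!/bin/bash
set -eo pipefail
export HTTP_PROXY="$1"
export HTTPS_PROXY="$1"
export NO_PROXY="$2"
export http_proxy="$1"
export https_proxy="$1"
export no_proxy="$2"
shift 2
exec "$@"
EOF
chmod +x /tmp/proxy-capsule.sh

# the proxy variables exist only inside the wrapped command:
/tmp/proxy-capsule.sh http://proxy.example:3128 localhost \
    sh -c 'echo "wrapped: $HTTP_PROXY"'   # prints "wrapped: http://proxy.example:3128"
```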
... and then in the Dockerfile ...
FROM postgres:14.6-alpine3.17
ARG HTTP_PROXY
ARG NO_PROXY
# Create app directory and upload proxy-capsule.sh script
RUN mkdir /app
COPY proxy-capsule.sh /app/proxy-capsule.sh
# Update package manager
RUN /app/proxy-capsule.sh $HTTP_PROXY $NO_PROXY apk update
This way, the HTTP proxy is only set for the single shell session and you don't have to remove it on startup in the entrypoint script. In my opinion, this is the more accurate solution, since it results in a cleaner Docker image. If an HTTP proxy is required in the container's runtime environment, the environment variables can be specified using the -e argument of the docker run command.
The downside is that you have to prefix every command in your Dockerfile that requires internet access with RUN /app/proxy-capsule.sh $HTTP_PROXY $NO_PROXY.
Upvotes: 1
Reputation: 499
I personally was looking to remove all environment variables to get a fresh image, but without losing the contents inside the image.
The problem was that when I reused this image and set those environment variables to new values, they were not changed; the old values were still present.
My solution was to reinitialize the image with docker export and then docker import.
Export
First, spin up a container from the image, then export the container to a tarball:
docker export container_name > my_image.tar
Import
Import the tarball as a new image:
docker import my_image.tar my_image_tag:latest
Doing this will reset the image, meaning only the filesystem contents of the container will remain. All layers, environment variables, entrypoint, and command metadata will be gone.
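If the image still needs an entrypoint or command afterwards, docker import can re-apply Dockerfile instructions at import time via --change. A sketch, assuming the container was started from the postgres image above (the ENTRYPOINT/CMD values are placeholders you would copy from the original image's metadata):

```shell
docker export container_name > my_image.tar
docker import \
    --change 'ENTRYPOINT ["docker-entrypoint.sh"]' \
    --change 'CMD ["postgres"]' \
    my_image.tar my_image_tag:latest
```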
Upvotes: 1
Reputation: 159475
You need to edit the Dockerfile that built the original image. The Dockerfile ENV directive has a couple of different syntaxes to set variables, but none to unset them. docker run -e and the Docker Compose environment: setting can't do this either. This is not an especially common use case.
Depending on what you need, it may be enough to set the variables to an empty value, though this is technically different.
FROM my_image
ENV env_variable_1=""
RUN test -z "$env_variable_1" && echo variable 1 is empty
RUN echo variable 1 is ${env_variable_1:-empty}
RUN echo variable 1 is ${env_variable_1-unset}
# on first build will print out "empty", "empty", and nothing
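The distinction those three RUN lines rely on, between an empty variable and an unset one, can be checked in plain shell:

```shell
env_variable_1=""
echo "with :- -> ${env_variable_1:-empty}"   # :- substitutes when unset OR empty; prints "empty"
echo "with -  -> ${env_variable_1-unset}"    # -  substitutes only when unset; prints nothing after the arrow
unset env_variable_1
echo "after unset -> ${env_variable_1-unset}"   # prints "unset"
```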
The big hammer is to use an entrypoint script to unset the variable. The script would look like:
#!/bin/sh
unset env_variable_1 env_variable_2
exec "$@"
It would be paired with a Dockerfile like:
FROM my_image
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["same", "as", "before"]
docker inspect would still show the variable as set (because it is in the container metadata), but something like ps e that shows the container process's actual environment will show it unset.
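The entrypoint trick can also be sketched outside Docker: the wrapper unsets the unwanted variables and execs the real command, which inherits everything else (variable names taken from the question):

```shell
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
unset env_variable_1 env_variable_2
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh

export env_variable_1=abcdef env_variable_2=ghijkl env_variable_3=mnopqr
/tmp/entrypoint.sh sh -c 'echo "1=${env_variable_1-<unset>} 3=${env_variable_3-<unset>}"'
# prints "1=<unset> 3=mnopqr"
```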
As a general rule you should always use the docker build system to create an image, and never use docker commit. ("A modified Ubuntu image" isn't actually a reproducible recipe for debugging things or asking for help, or for rebuilding the image when a critical security patch appears in six months.) docker inspect isn't intrinsically harmful, but it emits an awful lot of useless information; I rarely have reason to use it.
Upvotes: 4
Reputation: 189
Maybe you can try it this way, as in this answer:
docker exec -it -e env_variable_1 my_container bash
And then commit the container as usual.
Upvotes: 2