Martijn Van Loocke

Reputation: 533

chmod in Dockerfile does not permanently change permissions

I have the below Dockerfile to create a nodejs server. It clones a repo that contains the script that starts the node servers but that script needs to be executable.

The first time I start this docker container, it works as expected. I know the chmod is doing something because without it, the script "does not exist" (it is missing the x permission). However, if I then open a TTY into the container and inspect the file, the permissions have not changed. If the container is restarted, it again gives an error that the file does not exist (the x permission is missing). If I execute the chmod manually after the first run, I can restart the container without issues.

I have tried it as part of the combined RUN command (as below) and as a separate RUN command, both written normally and as RUN ["chmod", "755", "/opt/youtube4me/start_all.sh"], and both with 755 and +x. All of these variants work on the first start but do not permanently change the permissions.

I'm very new to Docker and I think I'm missing some basic concept that explains why chmod seems to run in some sort of context (maybe?).

FROM node:10.19.0-alpine
RUN apk update && \
    apk add git && \
    git clone https://github.com/0502-crew/youtube4me.git /opt/youtube4me && \
    chmod 755 /opt/youtube4me/start_all.sh
COPY config/. /opt/youtube4me/
EXPOSE 45011
WORKDIR /opt/youtube4me
CMD ["/opt/youtube4me/start_all.sh"]

UPDATE 1

These are the commands I run with docker, i.e. start_all.sh:

#!/bin/sh

# Reset to master in case any force pushes were done
git reset --hard origin/master
# Start client server
cd client && npm i && npm run build && nohup npm run prod &
# Start api server
cd api && npm i && npm run build && nohup npm run prod &
# Keep server running by tailing nothing, forever...
tail -f /dev/null
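Note that each backgrounded && chain runs in its own subshell, so the cd client does not affect the parent shell; that is why the cd api afterwards still works from the repo root. A tiny illustration:

# The whole && chain before the & runs in a subshell, so the cd is local to it
cd client && pwd &    # prints the path of client/
wait                  # wait for the background job
pwd                   # the parent shell is still in the original directory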

The config folder from which I COPY just contains 2 .ts files that are in .gitignore, so they need to be added manually:

config/api/src/config/Config.ts
config/client/src/config/Config.ts

UPDATE 2

If I replace the CMD line with CMD ["sh", "-c", "tail -f /dev/null"], then the x permission remains on the sh file. It seems that executing the script removes the permission...
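A plausible explanation, given the git reset --hard origin/master at the top of start_all.sh: with core.fileMode enabled (the default on Linux), a hard reset also restores each file's committed mode, so a chmod that was never committed gets undone the moment the script runs. A minimal sketch of that behaviour, assuming the script is committed with mode 100644:

chmod 755 start_all.sh              # working tree now has the x bit
git ls-files --stage start_all.sh   # index still records 100644
git reset --hard origin/master      # restores the committed mode
ls -l start_all.sh                  # x bit is gone again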

Upvotes: 1

Views: 8356

Answers (2)

Martijn Van Loocke

Reputation: 533

As @dennisvandehoef pointed out, the fundamentals of my approach were off. Including a git clone in the Dockerfile will, for example, have the side effect that building the image a second time will not clone the repo again (the layer is cached), even if files have changed.

However, I did find a solution within this approach, so I might as well share it for the sake of knowledge sharing:

I develop on Windows, so I don't need to set execute permissions on a script locally, but Linux does need them. I found out you can still set the +x permission bit using git on Windows and commit it. When docker clones the repo, chmod is then no longer needed at all.

git update-index --chmod=+x start_all.sh

If you then list the files with this command

git ls-files --stage

you will see that all other files are 100644, while the sh file is 100755 (i.e. permission 755).

Next just commit it

git commit -m"Made sh executable"

I had to delete my docker images afterwards and rebuild them to ensure the git clone was performed again. I'll rewrite my Dockerfile at a later point, but for now it works as intended.
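As an aside, deleting the images isn't strictly necessary: building with the cache disabled forces the clone layer to run again (the tag name here is an assumption):

docker build --no-cache -t youtube4me .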

It is still a mystery to me why the x bit disappears when the chmod is part of the Dockerfile.

Upvotes: 4

dennisvandehoef

You are building a Dockerfile for the project you are pulling from git, and your start_all script includes a git reset --hard origin/master, which is bad practice: you can then no longer create fixed versions of your docker images, since the code changes underneath them.

You also copy these 2 files to the wrong directory: with COPY config/. /opt/youtube4me/ you copy them directly to the root of your project, not to their intended locations api/src/config/Config.ts and client/src/config/Config.ts.

I realize that fixing these problems will not fix the chmod problem itself, but it might make it go away for your specific use case.

Also, if these are secrets you are excluding from git, you should not add them to your docker image at build time either, since then (after you push the image) they would be public again. It is therefore good practice to never add secrets while building.

Did you try something like this:

Dockerfile

FROM node:10.19.0-alpine

RUN mkdir /opt/youtube4me    
COPY . /opt/youtube4me/
WORKDIR /opt/youtube4me

RUN chmod 755 /opt/youtube4me/start_all.sh

EXPOSE 45011
CMD ["/opt/youtube4me/start_all.sh"]

.dockerignore

api/src/config/Config.ts
client/src/config/Config.ts

script

#!/bin/sh

# Start client server
cd client && npm i && npm run build && nohup npm run prod &
# Start api server
cd api && npm i && npm run build && nohup npm run prod &
# Keep server running by tailing nothing, forever...
tail -f /dev/null
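Assuming this layout, build the image the usual way (the tag name is an assumption):

docker build -t youtube4me .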

You then need to invoke docker run with 2 volumes that mount the config into the container. This is done with -v; note that the container-side paths must be absolute. For example:

docker run -v $(pwd)/config/api/src/config:/opt/youtube4me/api/src/config -v $(pwd)/config/client/src/config:/opt/youtube4me/client/src/config youtube4me

Nevertheless, I wonder why you are running both services in one docker image. If this is just for local execution, you could also create 2 separate docker images and use docker-compose to spawn them all.

Addition 1:

I thought a bit about this, and it is also good practice to configure a docker image through environment variables instead of adding a config file to it. You might want to switch to that as well. I advise reading this article to get a better understanding of the why and of other possibilities.
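For instance, configuration could be passed at runtime instead of being baked into the image (the variable names here are purely illustrative):

docker run -e API_PORT=45011 -e CLIENT_PORT=45012 -p 45011:45011 youtube4me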

Addition 2:

I created a pull request on your code as an example of how it could look with docker-compose. It currently does not build because of the config options, but it will give you some more insights.

Upvotes: 1
