atline

Reputation: 31574

Possible to re-enter an existing Docker container which is already down?

Background:

I need to develop a Dockerfile.

But before that, I want to choose a base image and start a container based on it.

After the container starts, I'd like to try installing some packages and modifying some of my service's configuration. Once everything works when done manually, I can move all these steps (the package installs and configuration changes proven by manual trial) into my Dockerfile.
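This exploration phase usually starts with something like the following (a sketch using the alpine image from the example below; the container name and the commands tried inside are just illustrations):

```shell
# Start a throwaway container from the candidate base image
# and get an interactive shell to experiment in.
docker run -it --rm --name probe alpine /bin/sh

# Inside the container, try things out, e.g.:
#   apk add curl
#   vi /etc/myservice.conf    # hypothetical config path
```

Each command that works can then be copied into the Dockerfile as a RUN step.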

Problem:

Sometimes, after many correct steps, I make a wrong configuration change. What's worse, I then stop the container and start it again to check whether the start script still works.

But unfortunately, the container can no longer be started, because of the wrong app configuration inside it.

In fact, I want to try other configurations in the container, and then maybe everything would be fine, but I no longer have a chance to get into the container again. I don't want to set up a new container, because I have already done many manual things in the old one. (I haven't moved the correct steps into the Dockerfile yet because I'm still in the development phase; I want to do that once everything is proven to work.)

Here is a minimal example showing my case:

Dockerfile:

FROM alpine

ADD ./docker-entrypoint.sh .
RUN chmod 777 ./docker-entrypoint.sh

ENTRYPOINT ["./docker-entrypoint.sh"]

docker-entrypoint.sh:

#!/bin/sh

touch /tmp/app.log
tail -f /tmp/app.log

What I do with the above:

docker build --no-cache -t try .
docker run -idt --name me try
docker exec -it me /bin/sh

Inside the container, change docker-entrypoint.sh to:

#!/bin/sh

exit 0
touch /tmp/app.log
tail -f /tmp/app.log

Then:

docker stop me
docker start me # the container will not start

And now I know exit 0 was a wrong command or wrong configuration. I want to try again with something else, but there's no chance.

Again, I want to note:

I do not want to set up a new container, because I've already done many manual things in the old container, not just exit 0.

Also, supervisord is not what I need in my development lifecycle; I just want to keep things simple. Something like changing the entrypoint of an already existing container (which seems to work only for docker run).
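For reference, the entrypoint override does exist, but only when creating a new container, not when restarting an existing one; a sketch using the try image built above:

```shell
# --entrypoint replaces the image's ENTRYPOINT for this new container only;
# it cannot be applied retroactively to an already-created container.
docker run -it --entrypoint /bin/sh try
```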

Any suggestions to ease my development phase?

Upvotes: 1

Views: 171

Answers (2)

David Maze

Reputation: 158847

Just write your Dockerfile. Edit, run docker build, repeat.

There's nothing especially magical about a Dockerfile. If you take your approach but decide to take detailed notes in a text file ("I want to start FROM some base image, then RUN some command, COPY some files in, and RUN some other command"), that's a Dockerfile. It has the advantage of always starting from a clean environment and always being reproducible.

For example, say you're trying to compile some package by hand, but don't quite have the configure options right.

FROM ubuntu:18.04
RUN apt-get update && apt-get install -y build-essential
WORKDIR /package
COPY some-package.tar.gz ./
RUN tar xzf some-package.tar.gz
WORKDIR /package/some-package
RUN ./config --wrong-option
RUN make
RUN make install
CMD ["some-command"]

Now you can run docker build . and it will run through this sequence of commands. Perhaps when it reaches RUN ./config it will fail (because the script is actually named ./configure). You can edit the Dockerfile and re-run docker build, and Docker will start over where it failed before. Similarly, when you discover that --wrong-option is wrong, you can change it and docker build will restart from the changed line.

If you need to do further debugging on a broken stage (maybe --wrong-option makes it through the configuration step but building fails) the docker build output includes an image ID for each layer, and you can docker run --rm -it 0123456789ab sh to get a shell on the partial image before the step that's having problems.

There are optimizations like combining RUN lines together and multi-stage builds that are useful, but you can save them for last.
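As an illustration of the RUN-combining optimization mentioned above, the unpack/configure/build steps from the example could be merged into a single layer once they are known to work (a sketch; the package name is the placeholder from the example):

```dockerfile
RUN tar xzf some-package.tar.gz \
 && cd some-package \
 && ./configure \
 && make \
 && make install
```

Combining the steps reduces the number of layers, at the cost of losing the per-step build cache during development, which is why it is best saved for last.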

docker exec isn't intended to be the primary way you interact with Docker. You'll run into exactly the problems you're encountering: there's no persistent record of what you've done, it's not especially reproducible, and if the container ever gets deleted you'll lose all your work.

Upvotes: 1

Edward Aung

Reputation: 3512

Interesting. Basically, you now have a container that runs a script which exits immediately, yet you want to save it.

How about trying the following:

  1. Commit your container as another image.

    docker commit me me/snap:v001

  2. Run a shell using the image. (The image's ENTRYPOINT is the broken script, so override it; note also that the alpine base has no bash.)

    docker run -it --name me2 --entrypoint /bin/sh me/snap:v001

  3. Fix your entrypoint code.
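Put together, the recovery might look like this (a sketch based on the minimal example from the question; the sed command is just one way to drop the bad line):

```shell
# Snapshot the broken container's filesystem as an image.
docker commit me me/snap:v001

# Start a new container from the snapshot, bypassing the broken ENTRYPOINT.
docker run -it --name me2 --entrypoint /bin/sh me/snap:v001

# Inside the new container, remove the bad line, e.g.:
#   sed -i '/^exit 0$/d' ./docker-entrypoint.sh
#   exit

# Commit the fix and run with the original entrypoint again.
docker commit me2 me/snap:v002
docker run -d --name me3 me/snap:v002
```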

Upvotes: 1
