Reputation: 107
I have a Docker image named centos7_hadoop, and I run this command to create a container:
docker run --network hadoop-network --name hadoop1 --hostname hadoop1 -d -P centos7_hadoop
When I run it, I'm so happy; it seems like I've entered a new host named hadoop1:
docker exec -it hadoop1 /bin/bash
I want to store my changes, so I add a line to /etc/hosts. Before:
172.18.0.4 hadoop1
After:
172.18.0.4 hadoop1
172.18.0.2 hadoop0
It seems like I succeeded! But when I restart the container, my change is lost; the file only has
172.18.0.4 hadoop1
So I commit the change to centos7_hadoop:hadoop1:
[root@hadoop1 /]# vi /etc/hosts    // add one line: 172.18.0.2 hadoop0
[root@hadoop1 /]# exit
exit
[lalala@localhost ~]$ docker commit a5e2c4a0a09f centos7_hadoop:hadoop1
I restart the container, thinking I can fly. But I failed. I use this command to enter a container:
[lalala@localhost ~]$ docker run -it centos7_hadoop:hadoop1 /bin/bash
[root@8ef3b77807d6 /]# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 8ef3b77807d6
It seems I've entered a new container with a new hostname! I don't want that; I want to enter the container whose hostname is hadoop1! So I try this:
[lalala@localhost ~]$ docker exec -it hadoop1 /bin/bash
[root@hadoop1 /]# cat /etc/hosts
...
172.18.0.4 hadoop1
Predictably, I failed. I check the container:
a5e2c4a0a09f  c143e0f071c1  "/usr/sbin/sshd -D"  15 hours ago  Up About a minute  0.0.0.0:32779->22/tcp  hadoop1
15 hours ago! So, how can I keep my data after restarting the container?
Upvotes: 0
Views: 76
Reputation: 158995
You should learn how to use Dockerfiles to build custom images. The Dockerfile system is pretty straightforward: if you write down what changes you made to the container after you started it up, and just put RUN
before each line, you're pretty close to a correct Dockerfile. Once you have that, you can docker build
the image whenever you need it, update it when security updates or other changes appear, put it in some relatively safe source control, and generally not have to worry about losing your custom hand-built image.
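As a minimal sketch of what that could look like (the package names and sshd setup here are assumptions based on the image name and the /usr/sbin/sshd -D command in your docker ps output; replace the RUN lines with your actual manual steps):

```dockerfile
# Sketch of a CentOS 7 image running sshd; package names are assumptions.
FROM centos:7

# The container's command is /usr/sbin/sshd -D, so install and prepare sshd.
RUN yum install -y openssh-server openssh-clients \
 && ssh-keygen -A

# Each change you made by hand inside the container becomes a RUN line here.

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```

Then docker build -t centos7_hadoop . recreates the image reproducibly whenever you need it.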
The IP addresses you're listing look suspiciously like other containers' IP addresses. Never look these up directly! If you launch multiple containers from the same docker-compose.yml
file, or directly create a custom Docker network and launch containers on it
docker network create hadoop
docker run -d --name hadoop1 --net hadoop centos7_hadoop:hadoop1
docker run -d --name hadoop2 --net hadoop centos7_hadoop:hadoop1
then the different containers will be able to use the container names hadoop1
and hadoop2
as DNS names. (Remember, if you delete and recreate the containers, their IP addresses might change, and deleting and recreating containers is pretty routine.) This saves you from ever having to directly worry about what those IP addresses might be.
If these IP addresses come from somewhere else, consider setting up a DNS server so you don't have to hand-maintain them. If you really must use /etc/hosts
, Docker actually controls its contents fairly directly. You can inject values there with docker run --add-host
. An /etc/hosts
file in an image will be ignored and overwritten at run time.
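For example (using the hostname/IP pair from the question purely for illustration; the flag can be repeated for multiple entries):

```shell
# Inject an /etc/hosts entry at run time instead of baking it into the image.
docker run -d --name hadoop1 --net hadoop \
  --add-host hadoop0:172.18.0.2 \
  centos7_hadoop:hadoop1
```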
In my experience, neither docker commit
nor pushing values into /etc/hosts
is considered a best practice: both lead to fragile, hard-to-reproduce setups.
Upvotes: 1