Reputation: 662
I created an EBS volume, attached it to my Container Instance, and mounted it. In the task definition's volumes section I set the volume's Source Path to the mounted directory. The container data is not being created in the mounted directory; all other directories outside the mounted EBS volume work properly.
The purpose is to keep the data outside the container so it can be backed up to another volume.
Is there a way to use this attached volume with my container, or is there a better way to work with volumes and backups?
EDIT: I tested this by running a random Docker image with the same volume specified and faced the same problem. I managed to make it work by restarting the Docker service, but I'm still looking for a solution that doesn't require restarting Docker.
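For reference, the restart workaround looks roughly like this (the upstart service names below assume the Amazon Linux ECS-optimized AMI; adjust for your distribution):

```shell
# Stop the ECS agent, restart Docker so it sees the filesystem mounted
# after the daemon started, then bring the agent back up
sudo stop ecs
sudo service docker restart
sudo start ecs
```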
Inspecting a container with a volume directory that is the mounted EBS
"HostConfig": {
"Binds": [
"/mnt/data:/data"
],
...
"Mounts": [
{
"Source": "/mnt/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
the directory displays:
$ ls /mnt/data/
lost+found
Inspecting a container with a volume directory that is not the mounted EBS
"HostConfig": {
"Binds": [
"/home/ec2-user/data:/data"
],
...
"Mounts": [
{
"Source": "/home/ec2-user/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
]
the directory displays:
$ ls /home/ec2-user/data
databases dbms
Upvotes: 15
Views: 20962
Reputation: 675
I ended up adding the following commands to user_data.sh:
instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
volume_id=$(aws ec2 describe-volumes --region us-west-2 \
  --filters "Name=attachment.instance-id,Values=$instance_id" \
  | jq -r '.Volumes[0].VolumeId')
# This will grow the EBS volume to 1024 GiB
aws ec2 modify-volume --region us-west-2 --size 1024 --volume-id "$volume_id"
# Expand the partition and the XFS filesystem to use the new space
sudo growpart /dev/nvme0n1 1
sudo xfs_growfs /dev/nvme0n1p1
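One caveat: modify-volume is asynchronous, so growpart can run before the new size is visible to the instance. A sketch of waiting for the modification to progress far enough (describe-volumes-modifications is a real AWS CLI command; the polling loop itself is my own addition and assumes $volume_id from above):

```shell
# The new size is usable once the modification state reaches
# "optimizing" or "completed"; poll until then before resizing
while true; do
  state=$(aws ec2 describe-volumes-modifications --region us-west-2 \
    --volume-ids "$volume_id" \
    --query 'VolumesModifications[0].ModificationState' --output text)
  [ "$state" = "optimizing" ] || [ "$state" = "completed" ] && break
  sleep 5
done
```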
Upvotes: 0
Reputation: 713
The current documentation on Using Data Volumes in Tasks seems to address this problem:
Prior to the release of the Amazon ECS-optimized AMI version 2017.03.a, only file systems that were available when the Docker daemon was started are available to Docker containers. You can use the latest Amazon ECS-optimized AMI to avoid this limitation, or you can upgrade the docker package to the latest version and restart Docker.
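In practical terms, on an older AMI the upgrade-and-restart path looks something like this (package and service names assume Amazon Linux 1, the base of the ECS-optimized AMIs of that era):

```shell
# Upgrade the docker package, restart the daemon so filesystems mounted
# after boot become visible to containers, then restart the ECS agent
sudo yum update -y docker
sudo service docker restart
sudo start ecs
```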
Upvotes: 0
Reputation: 18242
It sounds like what you want is AWS EC2 Launch Configurations. With a launch configuration you can specify EBS volumes to be created and attached to your instance at launch, which happens before the Docker daemon and subsequent tasks are started.
As part of your launch configuration, you'll want to also update the User data under Configure details with something along the lines of:
#!/bin/bash
mkdir -p /data
# Only format if the device has no filesystem yet (mkfs would wipe existing data)
blkid /dev/xvdb || mkfs -t ext4 /dev/xvdb
mount /dev/xvdb /data
# nofail keeps the instance booting even if the volume is missing
echo '/dev/xvdb /data ext4 defaults,nofail 0 2' >> /etc/fstab
Then, as long as your container is set up to access /data on the host, everything should work on the first go.
Bonus: If you're using ECS clusters, I presume you're already making use of Launch Configurations to get your instances joined to the cluster. If not, you can add new instances automatically as well, using something like:
#!/bin/bash
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent \
  --detach=true \
  --restart=on-failure:10 \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --volume=/var/log/ecs/:/log \
  --volume=/var/lib/ecs/data:/data \
  --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
  --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro \
  --publish=127.0.0.1:51678:51678 \
  --env=ECS_LOGFILE=/log/ecs-agent.log \
  --env=ECS_AVAILABLE_LOGGING_DRIVERS='["json-file","syslog","gelf"]' \
  --env=ECS_LOGLEVEL=info \
  --env=ECS_DATADIR=/data \
  --env=ECS_CLUSTER=your-cluster-here \
  amazon/amazon-ecs-agent:latest
Specifically in that bit, you'll want to edit this part: --env=ECS_CLUSTER=your-cluster-here
Hope this helps.
Upvotes: 4