Kyle Chadha

Reputation: 4161

How do you share volumes between Docker containers in an Elastic Beanstalk application?

I'm trying to share data between two Docker containers running in a multicontainer Docker environment on an AWS EC2 instance.

Normally, I would specify the volume as a command flag when running the container, e.g. docker run -p 80:80 -p 443:443 --link Widget:Widget --volumes-from Widget --name Nginx1 -d nginx1 to share a volume from Widget to Nginx1.

However, since Elastic Beanstalk requires you to specify your Docker configuration in a Dockerrun.aws.json file and then handles running your Docker containers internally, I haven't been able to figure out how to share data volumes between containers.

Note that I'm not trying to share data from the EC2 instance into a Docker container -- this part seems to work fine; rather, I would like to share data directly from one Docker container to another. I know that Docker container volumes are shared with the host at "/var/lib/docker/volumes/fac362...80535" etc., but since this location is not static, I don't know how I would reference it in the Dockerrun.aws.json file.
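
For reference, sharing a host directory into a container (the part that already works for me) looks roughly like this in Dockerrun.aws.json; the volume name and paths here are hypothetical:

    "volumes": [
        {
            "name": "app-data",
            "host": {
                "sourcePath": "/var/app/data"
            }
        }
    ]

and then in the consuming container's definition:

    "mountPoints": [
        {
            "sourceVolume": "app-data",
            "containerPath": "/usr/share/data"
        }
    ]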

Has anyone found a solution or a workaround?

More info on Dockerrun.aws.json and the config EB is looking for is here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html

Thanks!

Upvotes: 6

Views: 4220

Answers (3)

mcgoosh

Reputation: 59

You need to specify a dockerVolumeConfiguration in your Dockerrun.aws.json file; that's how we accomplished this. The configuration essentially gets pushed through to ECS.

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html

Here is a snippet of my .json file:

 "volumes": [
    {
      "name": "shared-data",
      "dockerVolumeConfiguration": {
          "scope": "shared",
          "driver": "local",
          "autoprovision": false
      }
    } 
  ],

  "containerDefinitions": [
    {
      "name": "api",
      "image": "account.dkr.ecr.us-west-2.amazonaws.com/api:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 5000,
          "containerPort": 3050
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "shared-data",
          "containerPath": "/var/area_data"
        }
      ]
    },
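
The snippet above shows only the first container mounting the volume. To actually share the data, a second container definition in the same containerDefinitions array would reference the same sourceVolume; a minimal sketch, assuming a hypothetical worker container:

    {
        "name": "worker",
        "image": "account.dkr.ecr.us-west-2.amazonaws.com/worker:latest",
        "essential": true,
        "memory": 128,
        "mountPoints": [
            {
                "sourceVolume": "shared-data",
                "containerPath": "/var/area_data"
            }
        ]
    }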

Upvotes: 0

eliyahud

Reputation: 161

To accomplish what you want, use the volumesFrom parameter, and make sure the container sharing its internal data exposes the volume with a VOLUME instruction in its Dockerfile.

Here's an example Dockerfile which I used to bundle some static files for serving via a webserver:

FROM tianon/true
COPY build/ /opt/static
VOLUME ["/opt/static"]

Now the relevant parts of the Dockerrun.aws.json:

{
    "name": "staticfiles",
    "image": "mystaticcontainer",
    "essential": false,
    "memory": 16
},
{
    "name": "webserver",
    ...
    "volumesFrom": [
        {
            "sourceContainer": "staticfiles"
        }
    ]
}

Note that you don't need any volumes entry in the root of the Dockerrun.aws.json file, since the volume is only shared between the two containers, and not persisted on the host. You also don't need any specific mountPoints key in the container definition holding the volume to be shared, as the container with volumesFrom automatically picks up all the volumes from the referred container. In this example, all the files in /opt/static in the staticfiles container will also be available to the webserver container at the same location.
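
Putting the pieces together, a complete Dockerrun.aws.json for this setup might look roughly like the following; the webserver image, memory values, and ports are placeholder assumptions:

    {
        "AWSEBDockerrunVersion": 2,
        "containerDefinitions": [
            {
                "name": "staticfiles",
                "image": "mystaticcontainer",
                "essential": false,
                "memory": 16
            },
            {
                "name": "webserver",
                "image": "nginx",
                "essential": true,
                "memory": 128,
                "portMappings": [
                    {
                        "hostPort": 80,
                        "containerPort": 80
                    }
                ],
                "volumesFrom": [
                    {
                        "sourceContainer": "staticfiles"
                    }
                ]
            }
        ]
    }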

Upvotes: 13

bilby91

Reputation: 904

From the AWS docs I found this:

You can define one or more volumes on a container, and then use the volumesFrom parameter in a different container definition (within the same task) to mount all of the volumes from the sourceContainer at their originally defined mount points.

The volumesFrom parameter applies to volumes defined in the task definition, and those that are built into the image with a Dockerfile.
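
For example, a consuming container's definition could mount all volumes from another container read-only; this is a minimal sketch, and the readOnly flag is optional (it defaults to false):

    "volumesFrom": [
        {
            "sourceContainer": "staticfiles",
            "readOnly": true
        }
    ]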

Upvotes: 2
