Paul

Reputation: 36319

Mounting directories on ECS via API?

This may be a serverfault thing, but since I'm trying to do it via API or otherwise programmatically, I'm going to assume the question is for StackOverflow until I'm told otherwise.

I'm trying to replace Deis with ECS in an application I'm working on. The application itself currently is able to spin up new apps (docker containers running web applications) on Deis by checking out source code from our private git repo using a deploy key, and then pushing said code to a Deis endpoint (which then handles creating and spinning up the docker containers and so on).

Deis has been fairly flaky, though, so I'm exploring replacing it.

ECS seems a good fit, and by using the Buildstep container I've successfully run a Heroku-like deployment of code from my private repository at the command line using docker.

To do so I had to map my ssh key directory into the container as part of the run command:

docker run -d -v ~/.ssh:/root/.ssh -p 3000:3000 -e PORT=3000 -e GIT_REPO=private-repo-url.git tutum/buildstep /start web

That mostly works, except for two things. The first is that I don't know the best way to do this when calling the task-registration API. From my understanding, registering a task on ECS with volumes and mount points is possible, but the volume needs to exist on whichever ECS cluster host ends up running the task (I could use confirmation on this), and that host isn't known at registration time. The only examples I could find used local file paths.
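For what it's worth, here is a minimal sketch of what such a registration could look like via the AWS CLI. The family name, the "deploy-keys" volume name, and the /opt/deploy-keys host path are all hypothetical; sourcePath is resolved on whichever container instance ends up running the task, so the directory would need to exist on every host in the cluster:

```shell
# Hedged sketch, not a working deployment: register a task definition
# whose volume maps a host directory into the container at /root/.ssh.
# "deploy-keys" and /opt/deploy-keys are placeholder names.
aws ecs register-task-definition \
  --family buildstep-web \
  --volumes '[{"name": "deploy-keys", "host": {"sourcePath": "/opt/deploy-keys"}}]' \
  --container-definitions '[{
    "name": "web",
    "image": "tutum/buildstep",
    "memory": 512,
    "command": ["/start", "web"],
    "environment": [{"name": "PORT", "value": "3000"},
                    {"name": "GIT_REPO", "value": "private-repo-url.git"}],
    "portMappings": [{"containerPort": 3000, "hostPort": 3000}],
    "mountPoints": [{"sourceVolume": "deploy-keys", "containerPath": "/root/.ssh"}]
  }]'
```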

So, first question: how do I either inject my deployment key into the container, or reliably map a 'keys' directory that the container can attach on bootup?

The other part of this is less of a concern, but ideally the keys wouldn't stay on every container once the git pull is done. The cleanest way to handle this will probably depend on how I solve the first question, but the second question is: how do I clean up my keys so they don't remain on the containers after deployment is complete?

Upvotes: 1

Views: 358

Answers (2)

mcheshier

Reputation: 745

Do you really need to mount a directory? If it's available in your region, EFS can be a good solution for having a generic file store for your ECS implementation.

Check out:

https://aws.amazon.com/blogs/compute/using-amazon-efs-to-persist-data-from-amazon-ecs-containers/

for a reference implementation, though it is overkill for what you're trying to do.

Upvotes: 1

Marc Young

Reputation: 4012

How do I either inject my deployment key into the container, or reliably map a 'keys' directory that the container can attach on bootup

I would do that in user-data. User-data for ECS is required anyway so that the instance can join the cluster. Make the key available somewhere (S3, whatever) and have the instance pull it down on startup, say into /opt/foo. Your task definition can then map /opt/foo from the host without issue, since every host in the cluster has it.
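As a sketch (the cluster name, S3 bucket, key name, and /opt/foo path are all placeholders, and the instance role would need read access to the bucket), the user-data could look something like:

```shell
#!/bin/bash
# User-data sketch for an ECS container instance. Names are assumptions.
# Join the cluster so the ECS agent registers this instance.
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config

# Pull the deploy key down to a well-known host path that task
# definitions can map into containers.
mkdir -p /opt/foo
aws s3 cp s3://my-bucket/deploy_key /opt/foo/id_rsa
chmod 600 /opt/foo/id_rsa
```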

How do I cleanup my keys so they don't remain on the containers after deployment is complete

This is a separate issue. If your containers ever stop and start, that key needs to be available again, correct? If so, you can't clean it up, since the task might start on a different instance at any time. If you only need it once, you can have your container's startup CMD run a cleanup command once deployment is done. This sounds fragile, however. Why not use a read-only account and provide a username/password instead of git+ssh? You could provide those via environment variables and not need to do half of this.
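An illustrative version of that last suggestion, mirroring the docker run command from the question (the account, token, and repository URL are placeholders; a read-only access token is preferable to a real password):

```shell
# Illustrative sketch only: embed read-only HTTPS credentials in the
# repo URL via an environment variable instead of mounting SSH keys,
# so nothing sensitive needs to live on the container's filesystem.
docker run -d -p 3000:3000 -e PORT=3000 \
  -e GIT_REPO="https://deploy-bot:READ_ONLY_TOKEN@example.com/org/app.git" \
  tutum/buildstep /start web
```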

Upvotes: 1
