user3302174

Reputation: 1

Container delivery on Amazon ECS

I’m using Amazon ECS to auto-deploy my containers on uat/production. What is the best way to do that?

I have a REST API with several front-end clients.

Should I package my API and nginx together in the same container, and do the same thing with the other front-end clients?

Or do I have to write one big task definition to bring together all my containers (db, nginx, php, api, clients)? :( That would mean redeploying my whole infrastructure on every push to uat/prod.

I'm very confused.

Upvotes: 0

Views: 359

Answers (3)

Shibashis

Reputation: 8401

Here are my two cents on the topic. The question is not really ECS-specific; it applies to anybody deploying their apps on Docker.

I would suggest separating the containers: one for nginx and one for the API. If they need to be co-located on the same instance, on ECS you can define them as part of the same task, and on Kubernetes you can make them part of the same pod. Define a Docker link between the nginx and the API container. This will allow the nginx process to talk to the API container without the API container exposing its ports to the host.
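As a rough sketch, an ECS task definition for this setup could look like the following. The container and image names here are placeholders, not something from the question; adjust memory and ports to your app:

```json
{
  "family": "api-stack",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "myrepo/api:latest",
      "memory": 256,
      "essential": true
    },
    {
      "name": "nginx",
      "image": "myrepo/nginx:latest",
      "memory": 128,
      "essential": true,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }],
      "links": ["api"]
    }
  ]
}
```

With the `links` entry, nginx can proxy to the API container by name (e.g. `proxy_pass http://api:8080;` in its config) while only port 80 on the nginx container is published to the host.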

One advantage of using container platforms such as Kubernetes and ECS is that they ensure each container keeps running and dynamically restart it if one of the processes/containers goes down.

Separating the containers allows these platforms to monitor both processes independently. When you combine the two into one container, the Docker container can only run with one of the processes in the foreground, so you will lose the advantage of auto-healing for the other process.

Also, moving from nginx to ELB is not a straightforward swap: you may have redirections and other things configured in nginx that are not available on ELB (as of this writing). If you also need the ELB, there is no harm in forwarding requests from the ELB to the nginx port.

Upvotes: 1

cylon-v

Reputation: 101

Do not use ECS - it's too crude. I was using it as the platform for our staging/production environments and had odd problems during deployments - sometimes it worked well, sometimes not (with the same Docker images). ECS does not provide a clear model of container deployment and maintenance.

There is another good, stable and predictable option - the Docker Cloud service. It's a new tool (formerly Tutum) that was acquired by Docker. I switched our CI/CD over to it and we're happy with it.

  1. Bind your Amazon user credentials to your Docker Cloud account. Docker Cloud uses the AWS (or other provider) API to create the appropriate compute instances.
  2. Create a Node. Select the Amazon EC2 instance type and parameters for storage, security group and so on. The new instance will come with Docker installed and a management container that handles messages from Docker Cloud (deploy, destroy and others).
  3. Create a Stackfile, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/. A Stackfile is a definition of the container group you require. You can define different scaling/distribution models for your containers using specific Stackfile options such as the deployment strategy, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#deployment-strategy-1.
  4. Define ELB configurations in AWS for your new instances.
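For step 3, a minimal Stackfile might look like this - the service and image names are invented for illustration, and the available keys are documented in the stack YAML reference linked above:

```yaml
api:
  image: myrepo/api:latest
  restart: always
nginx:
  image: myrepo/nginx:latest
  links:
    - api
  ports:
    - "80:80"
  restart: always
  deployment_strategy: high_availability
```

Docker Cloud reads this definition, schedules the containers across your Nodes according to the chosen deployment strategy, and restarts them if they die.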

P.S. I'm not a member of the Docker team, and I do like other AWS services :).

Upvotes: 1

mcheshier

Reputation: 745

I would avoid including too much in a single container. Try to distill your containers down to one process doing one thing. If all you're doing is serving up a REST API for consumption by your front end, put in just the essential pieces for that and no more.

In my experience you also want your ECS tasks to be able to handle failure gracefully and restart, and the more complicated your containers are, the harder this is to get right.

Depending on your requirements, I would look into using an ELB instead of nginx: you can point your ECS service at an ELB and not have to manage that piece at all.
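As a sketch of that wiring, the ECS service definition below attaches a (hypothetical) classic ELB to the API container; all names here are placeholders:

```json
{
  "serviceName": "api-service",
  "taskDefinition": "api-stack:1",
  "desiredCount": 2,
  "role": "ecsServiceRole",
  "loadBalancers": [
    {
      "loadBalancerName": "api-elb",
      "containerName": "api",
      "containerPort": 8080
    }
  ]
}
```

The ELB then handles health checks and traffic distribution across the two task copies, so you don't need an nginx container in front at all.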

Upvotes: 1
